
So, after all the hype, it turns out that it's the PC that had the real next-gen "secret sauce" all along?

SF Kosmo

Al Jazeera Special Reporter
DLSS 2.0 is generic and doesn't require per-game training.

Pretty much every ML project is using Nvidia hardware these days, no? Whenever I dabbled with ML, support for AMD hardware sucked, and using the CPU was out of the question since it was much slower. Maybe that has changed since then.

As I said before, the answer to DLSS doesn't need to be better than it; it just needs to be in the same ballpark.
3.0 is generic and doesn't require per-game training and implementation. 2.0 is not.
 

SF Kosmo

Al Jazeera Special Reporter
Is DLSS applied after the image has been completely constructed? Could console manufacturers add dedicated chips to upscale the image after it was rendered using ML, just like the PS4 Pro has a dedicated upscaling chip (if I remember correctly)?

Could Nvidia sell TV manufacturers chips that use DLSS to upscale stuff? If this is possible, please make every game have a 1080p mode and we will eventually be able to upscale them well.

Reading about it, I think it relies on additional information being generated alongside the original image as well, but that's possibly something they could work around.
DLSS is performed "after" the image is rendered in that it isn't filling in an incomplete image like checkerboard rendering, but it's not something that can be done strictly as a post-process either; it requires motion vector data so it can use past rendered frames to enhance detail, kind of like TAA.

It's not gonna take a system like the Switch and suddenly make it look great at 1080p if it can't reach a certain base level to begin with. It's best when going from 1080p or 1440p to 4K; when the source image is lower than that, it gets a lot more obvious.
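For anyone curious what "uses motion vectors like TAA" actually means in practice, here's a minimal sketch of the temporal reprojection step such upscalers share. DLSS itself replaces the blend heuristic with a trained network and does far more (history rejection, jittered sampling, etc.); every name and parameter below is made up for illustration:

```python
import numpy as np

def temporal_upscale(low_res, history, motion_vectors, blend=0.9):
    """Toy TAA-style temporal accumulation (NOT the actual DLSS algorithm).

    low_res        -- current frame, already naively upscaled to output size (H, W)
    history        -- previously accumulated output frame (H, W)
    motion_vectors -- per-pixel (dy, dx) screen-space motion, shape (H, W, 2)
    blend          -- how much history to keep; more history = more recovered detail
    """
    h, w = low_res.shape
    ys, xs = np.indices((h, w))

    # Reproject: use motion vectors to find where each output pixel
    # was located in the previous frame.
    prev_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1).astype(int)
    prev_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]

    # Blend accumulated detail from past frames with the new low-res sample.
    # Real implementations also clamp/reject stale history to avoid ghosting.
    return blend * reprojected + (1.0 - blend) * low_res
```

The takeaway is why it can't be a pure post-process: without the engine supplying motion vectors, there's no way to line up past frames with the current one.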
Wrong.

"While the original DLSS required per-game training, DLSS 2.0 offers a generalized AI network that removes the need to train for each specific game."

So I'm kind of confused about the differentiating claims made about 3.0 then.

 
Last edited:

VFXVeteran

Banned
DLSS is performed "after" the image is rendered in that it isn't filling in an incomplete image like checkerboard rendering, but it's not something that can be done strictly as a post-process either; it requires motion vector data so it can use past rendered frames to enhance detail, kind of like TAA.

It's not gonna take a system like the Switch and suddenly make it look great at 1080p if it can't reach a certain base level to begin with. It's best when going from 1080p or 1440p to 4K; when the source image is lower than that, it gets a lot more obvious.

So I'm kind of confused about the differentiating claims made about 3.0 then.


I read that as DLSS 3.0 working with TXAA while DLSS 2.0 doesn't.
 

SF Kosmo

Al Jazeera Special Reporter
no exclusives on that list... *sees Halo 2 and Shenmue on the list*

The point is not that exclusives are being excluded deliberately; it's that this is very obviously just a list of games with known budget figures, and a very incomplete one at that.

It's not that God of War or Last of Us II aren't more expensive than Mass Effect Andromeda or Shenmue; it's that we don't know exactly how much they cost. But what we do know about the size of the teams and the length of development puts them way above either of those games, probably by several times over, in fact.
 
Last edited:

Rikkori

Member
SSDs are going to converge with RAM. The distinction is moot once they're fast enough.

No, they won't. Ever. The difference between SSDs and RAM is orders of magnitude. There's no such thing as fast enough because there's still so much we can do with more RAM/VRAM; we're many generations away from reaching that sort of impasse, if consoles even still exist by then.
 
SSDs are going to converge with RAM. The distinction is moot once they're fast enough.

I've read this a few times on multiple sites now and I'm not sure how/why it has gotten traction. This likely won't be the case, at least not in my lifetime or yours. Eventually the architecture of PCs and similar devices might change enough that both volatile and non-volatile storage are replaced with a unified storage medium, but right now the cost of super-fast volatile storage like VRAM makes that impossible.

They aren't even comparable. SDRAM and VRAM are volatile storage mediums that serve a much different function than non-volatile storage like an SSD, which needs to keep things written indefinitely. Non-volatile storage, even a Gen4 NVMe SSD, is considerably slower than RAM. You don't want an SSD feeding display data directly to a GPU, especially in a game environment, if you have SDRAM or VRAM available to use instead; it would cause massive issues as the GPU waits for data from the comparatively sluggish SSD.

Faster NVMe SSDs can be used to more quickly pre-load data into system RAM (or directly into VRAM in the case of the consoles), but even with every compression technique and hardware trick in the book, the speed is way too slow to be used in place of VRAM, or even in place of SDRAM. The role of the fast Gen 4 SSD in the upcoming consoles is to make the single VRAM pool better able to do its job by sending only what it needs at the time, thanks to dedicated hardware and software designed to lighten the load on the VRAM, not to replace the VRAM outright. The consoles don't have the benefit of a second RAM pool to draw from and instead rely entirely on a small 16GB pool of GDDR6. In a PC you have SDRAM that pre-loads data and then fills the VRAM much more quickly than any storage medium ever could. I've said so before, but I think the 16GB unified RAM pools in the PS5 and XSX are going to be the next-gen bottleneck. They should have gone with at least 24GB, or supplemented the 16GB of VRAM with an added 8-16GB SDRAM pool. I run a PC with 11GB of GDDR6 and 32GB of DDR4, and I can see that not being enough to really push things in a couple of years.

SSDs will never close the speed gap either. With each generational leap, PCIe storage standards double potential SSD speeds, but RAM speeds increase at a similar rate. As SSDs go from 4GB/s to 8GB/s, SDRAM is going from 25.6GB/s+ to 51.2GB/s+ and VRAM speeds are closing in on 800GB/s on PC GPUs.

Even Gen 5 SSDs won't come close to the speed of current GDDR6, older GDDR5X or even really old GDDR5.

The Gen4 SSD in the PS5 is going to be 5.5GB/s. Faster SSDs from Samsung, Sabrent, Gigabyte, Corsair, etc. will exist by the time the PS5 launches, starting at 6.5GB/s for the Samsung 980 Pro. Gen 4 SSDs that will be compatible with the PS5 will cap out at 8GB/s.

The Gen 5 SSDs that will likely be in the PS6 and Xbox Series X 2/Version X or whatever they call it will double what Gen 4 SSDs can support, up to 16GB/s.


In comparison:

3200 DDR4 SDRAM runs at 25.6GB/s. Obviously DDR4 can run much higher than 3200, but 3200 is a common speed. DDR4 is currently at end-of-life and will be replaced by DDR5, which will double what DDR4 is capable of.

The Xbox One X used 12GB of GDDR5 VRAM @ 326GB/s

The PS5 will use 16GB of GDDR6 VRAM @ 448GB/s

The 1080 Ti used GDDR5X VRAM @ 484GB/s

The Series X will use 6GB of GDDR6 VRAM @ 336GB/s and 10GB of GDDR6 @ 560GB/s

An RTX 2080 Ti has 11GB of GDDR6 VRAM @ 616GB/s.

The RTX 30 series will use GDDR6X VRAM @ 768GB/s.


So by the time Gen 6 SSDs are capable of 64GB/s, VRAM bandwidth will be measured in TB/s.

Non-volatile storage is just a bank of data. Regardless of how much bandwidth an SSD has, its primary purpose is to store data. New technologies in hardware and software will allow consoles and PCs to better manage what the SSD sends to fill the VRAM, but no current storage device could ever refresh the data a GPU requires for a displayed 4K frame every 16.667 milliseconds in the case of a 60fps game. Volatile storage (RAM), on the other hand, only holds data until it loses power, at which point it is lost instantly, which is why you will never see RAM used as long-term storage. It's also much, much more costly than storage. They simply aren't interchangeable.
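To put rough numbers on that frame-budget point, here's a back-of-the-envelope sketch using the bandwidth figures quoted above (the tier labels are just for illustration, not a benchmark):

```python
# How much data each tier can move inside one 60fps frame (16.667 ms).
FRAME_TIME = 1 / 60  # seconds

tiers_gb_per_s = {
    "PS5 SSD (raw)":       5.5,
    "Gen 4 SSD (cap)":     8.0,
    "DDR4-3200 (1 ch)":    25.6,
    "PS5 GDDR6":           448.0,
    "RTX 2080 Ti GDDR6":   616.0,
}

for name, bandwidth in tiers_gb_per_s.items():
    per_frame_mb = bandwidth * FRAME_TIME * 1024
    print(f"{name:20s} {per_frame_mb:8.0f} MB per frame")

# The PS5's SSD delivers roughly 94 MB per frame, while its VRAM moves
# about 7.5 GB in the same window. That gap is why the SSD feeds the
# memory pool rather than replacing it.
```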
 
Last edited:

Shmunter

Member
SSDs will never need to be as fast as RAM to provide the effect of massive RAM. VRAM on a graphics card is a scratchpad for the GPU; it reads and re-reads assets and writes calculated data millions of times over to produce a final rendered image, which is why it needs as much speed as possible.

Loading an asset into RAM from storage is a one-time operation on request, and it simply needs to be fast enough to load within a specific timeframe determined by the dev, e.g. scene changes, LOD changes, etc. The demand is nowhere near the same league as VRAM, nor does it need to be.

Still, the faster it is, the more options open up for developers.
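A tiny sketch of that budget logic: a one-time load just has to finish inside whatever window the developer allows, which is a completely different constraint from the GPU's per-frame scratchpad traffic. The function and figures below are illustrative, not from any real engine:

```python
def fits_window(asset_mb: float, ssd_gb_per_s: float, window_s: float):
    """Can a one-time asset load finish inside the dev-chosen window?"""
    load_time_s = (asset_mb / 1024) / ssd_gb_per_s
    return load_time_s <= window_s, load_time_s

# e.g. streaming a 2 GB chunk of level data during a 1-second scene
# change at the PS5's raw 5.5 GB/s:
ok, t = fits_window(asset_mb=2048, ssd_gb_per_s=5.5, window_s=1.0)
print(f"loads in {t * 1000:.0f} ms -> {'fits' if ok else 'too slow'}")
```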
 
Last edited:

Elog

Member
Huh? Can you be more clear about this? As opposed to what?

If you need to swap out a significant portion of the data in your VRAM on a PC, that requires a load sequence (such as a load screen) due to the driver overhead in transferring data from RAM to VRAM, in comparison to, for example, the PS5.

Like what would be better looking? Textures? Shaders? Lighting? Ray-tracing? What exactly?

Let's look at the four key areas (there are others, but this is a rough breakdown for this discussion) that make up a computer-generated picture:

1) Overall rendered resolution - had a huge impact in the last generation in terms of how we perceive what is on the screen. Will have limited impact going forward. Once you hit 1800p the returns are very small in terms of how we perceive the picture on screens up to around 70". A lot of people still talk as if going beyond 1800p will do much except burning a massive amount of silicon budget for close to nothing.

2) Lighting/ shaders/ RT. Had a significant impact in the last generation and will continue to have so going forward. Once RT is in, I believe the impact will be lessened though. That is the main missing piece.

3) Amount of textures and texture quality. Had a significant impact in the last generation and the possibility to increase both the amount of textures and their resolution to a significant degree will have a huge impact on how we perceive graphics going forward.

4) Physics (foliage moving in wind, water, clouds, etc.). Limited impact due to the lack of development in the last generation. Massive gap between what is possible and what is done. Would have a huge impact on how we perceive graphics if done right. I have hopes here, since multi-core CPUs can enable a significant step-up in physics if developers start to use them now that Jaguar is no longer holding things back.

The Avatar movie rendered at 1080p obviously wins over that 4K game across all four categories. What we are talking about here though is 3), since that is what the new I/O solutions can address. When single models have more than 100 high-resolution textures in a CGI movie, you realise how big the gap currently is. And this is not due to GPU limitations as such in terms of CUs. This boils down to advanced geometry and the ability to load all those textures into the VRAM pool and then swap them out for new textures (due to size) when the camera moves.
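A minimal sketch of that swap-as-the-camera-moves idea: a fixed-size VRAM texture pool that evicts the least recently used entries to make room for whatever the camera now sees. Purely illustrative; the class, names, and numbers aren't from any real engine:

```python
from collections import OrderedDict

class TexturePool:
    """LRU-style VRAM texture residency, sized in MB (illustrative)."""

    def __init__(self, capacity_mb: int):
        self.capacity = capacity_mb
        self.used = 0
        self.resident = OrderedDict()  # texture id -> size in MB

    def request(self, tex_id: str, size_mb: int) -> str:
        if tex_id in self.resident:            # already in VRAM: mark it hot
            self.resident.move_to_end(tex_id)
            return "hit"
        while self.used + size_mb > self.capacity:   # evict coldest textures
            _, evicted_mb = self.resident.popitem(last=False)
            self.used -= evicted_mb
        self.resident[tex_id] = size_mb        # stream in from the SSD
        self.used += size_mb
        return "streamed"

pool = TexturePool(capacity_mb=10 * 1024)      # ~10 GB set aside for textures
print(pool.request("rock_albedo_4k", 64))      # streamed
print(pool.request("rock_albedo_4k", 64))      # hit
```

The faster the I/O, the smaller the penalty for a miss, which is exactly why it lets you keep many more (and bigger) textures in flight than the pool physically holds.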
 
Last edited:

ZywyPL

Banned
If you need to swap out a significant portion of the data in your VRAM on a PC, that requires a load sequence (such as a load screen) due to the driver overhead in transferring data from RAM to VRAM, in comparison to, for example, the PS5.

Someone posted a benchmark a few months ago about the situation you are mentioning, with some Radeon RX 5xxx card whose 4GB of RAM was insufficient, causing stutter/framedrops; swapping the storage drive from an HDD to an ordinary SATA3 SSD all of a sudden boosted the framerate by 20+ FPS and made the games run buttery smooth, because that's how little is required to eliminate the memory-swap hiccups. People forget that even ordinary SATA SSDs have orders of magnitude lower seek times than HDDs, and seek time is the biggest bottleneck in storage drives.
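Rough numbers on why seek time dominates those hiccups (typical access times, not figures from the benchmark mentioned): an HDD random read costs on the order of 10 ms, a SATA SSD around 0.1 ms, so a burst of small page-in reads that stalls an HDD for dozens of frames barely registers on an SSD.

```python
# Cost of N small random reads during a mid-game memory swap (rough figures).
reads = 50                       # small texture pages pulled in one hiccup
hdd_seek_s = 0.010               # ~10 ms per random read on an HDD
ssd_seek_s = 0.0001              # ~0.1 ms on a SATA SSD

frame_s = 1 / 60
print(f"HDD: {reads * hdd_seek_s / frame_s:5.1f} frames stalled")   # ~30 frames
print(f"SSD: {reads * ssd_seek_s / frame_s:5.2f} frames stalled")   # ~0.3 frames
```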
 

VFXVeteran

Banned
If you need to swap out a significant portion of the data in your VRAM on a PC, that requires a load sequence (such as a load screen) due to the driver overhead in transferring data from RAM to VRAM, in comparison to, for example, the PS5.

Yeah, I see what you are saying there, but you made it seem like a transfer speed of 5GB/s was faster than DDR4 transfer speed.


Let's look at the four key areas (there are others but this is a rough breakdown for this discussion) that make up a computer generated picture:

1) Overall rendered resolution - had a huge impact in the last generation in terms of how we perceive what is on the screen. Will have limited impact going forward. Once you hit 1800p the returns are very small in terms of how we perceive the picture on screens up to around 70". A lot of people still talk as if going beyond 1800p will do much except burning a massive amount of silicon budget for close to nothing.

That's not a good enough excuse, and you never mention how an SSD allows higher resolution in games. In fact, the SSD has zero to do with the rendering equation. Look at ray-tracing, for example. It is completely affected by how many rays you trace into the scene. Also, there are many render buffers that do operations in 2D space that already have to be in VRAM.

2) Lighting/ shaders/ RT. Had a significant impact in the last generation and will continue to have so going forward. Once RT is in, I believe the impact will be lessened though. That is the main missing piece.

Again, I see no correlation between lighting and shaders and SSD speed.

3) Amount of textures and texture quality. Had a significant impact in the last generation and the possibility to increase both the amount of textures and their resolution to a significant degree will have a huge impact on how we perceive graphics going forward.

Again. Filtering a texture using a filtering algorithm to search the texture space looking for texels on screen (i.e. anisotropic filtering) is a rendering problem, not an SSD->VRAM problem.

4) Physics (foliage moving in wind, water, clouds, etc.). Limited impact due to the lack of development in the last generation. Massive gap between what is possible and what is done. Would have a huge impact on how we perceive graphics if done right. I have hopes here, since multi-core CPUs can enable a significant step-up in physics if developers start to use them now that Jaguar is no longer holding things back.

Physics has always been a big burden on CPUs, and I don't see that going away anytime soon. They just aren't strong enough alongside everything else in the subsystem. My strong educated guess is that many physics algorithms are too hard to run in realtime today unless isolated to a given scenario. There are too many integrals that need to be evaluated. Fluid computation is one example, and the reason water surfaces look pretty ridiculous. Using modified sin()/cos() functions can only take us so far.
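For context on that sin()/cos() remark: realtime water is usually a cheap sum of sine (or Gerstner) waves evaluated per vertex, standing in for a fluid simulation that would otherwise mean numerically integrating Navier-Stokes every frame. A bare-bones sketch, with all parameters illustrative:

```python
import math

def water_height(x, z, t, waves):
    """Sum-of-sines water surface height: cheap and periodic, which is
    exactly why realtime water looks crude next to offline fluid sims."""
    h = 0.0
    for amplitude, wavelength, speed, (dx, dz) in waves:
        k = 2 * math.pi / wavelength            # wave number
        phase = k * (dx * x + dz * z) + t * speed
        h += amplitude * math.sin(phase)
    return h

waves = [
    (0.5, 8.0, 1.2, (1.0, 0.0)),                # long rolling swell
    (0.1, 1.5, 3.0, (0.6, 0.8)),                # short surface chop
]
print(water_height(2.0, 3.0, t=0.5, waves=waves))
```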

The Avatar movie rendered at 1080p obviously wins over that 4K game across all four categories. What we are talking about here though is 3), since that is what the new I/O solutions can address. When single models have more than 100 high-resolution textures in a CGI movie, you realise how big the gap currently is. And this is not due to GPU limitations as such in terms of CUs. This boils down to advanced geometry and the ability to load all those textures into the VRAM pool and then swap them out for new textures (due to size) when the camera moves.

Advanced geometry? You mean more triangles. Yes. It has always been and always will be the lighting/shading part of the equation that is so far behind in realtime.

All of these things depend much more on the sheer memory and power of the GPU/CPU than on the speed of the SSD. If I/O is no longer the bottleneck, the GPU/CPU/VRAM will be. The SSD is not a catch-all solution.
 
Last edited:

Amiga

Member
There are some next-gen only games, and still none of the I/O advancements are seen in action. There aren't even any unique textures as seen in Rage's MegaTextures; it's all yet again copy-paste of the same textures. What you are missing in the whole equation is the stuff BEFORE the SSD - the Blu-ray discs - they can contain only so much data, and bear in mind the discs don't need to duplicate the data, and we already see games reaching 100-150GB. So logically, next-gen games simply won't be able to store that much better or more unique textures, because the disc size didn't go up; it's still the same 100GB. Sure, there will be more impact from compression, but that will just bump the texture quality to what we have been seeing on PCs in the past decade or so; instead of Medium-High, consoles will be able to catch up with High-Ultra, as already proven in all the revealed games so far, where everything is so damn sharp and detailed (finally!). But like I said, all the I/O and SSD theories sound nice in theory, but in practice, where is the proof, where are the examples? Because we have already seen somewhere between 50-100 games that will be released in the next 2-3 years. If anything, RT is where the change is clear; you can see its impact from the very first second of a trailer/gameplay. This is what makes the next-gen games look next-gen.
A lot of the impact is on developer productivity. Devs will need to spend less time on code and more on content creation; there's less need for complex code to perfect asset streaming. And when a game is less complex to put together, it can be easier to patch up. Also, smaller indie devs can now produce higher-quality games.
 

NinjaBoiX

Member
A 2080 Ti costs like 1000-1200 bucks. These consoles will be half that.
Yup.

Cost and practicality are always the first things that pop into my head when I see a PC brag post, along with technical knowledge and a head for tinkering being almost a prerequisite.

“lol, just splash out £2000 on this huge ugly box and reorganise your house so you can fit in a computer desk if you want to play games at 4K/120, lol.”
 

Elog

Member
Advanced geometry? You mean more triangles. Yes. It has always been and always will be the lighting/shading part of the equation that is so far behind in realtime.

All of these things depend much more on the sheer memory and power of the GPU/CPU than on the speed of the SSD. If I/O is no longer the bottleneck, the GPU/CPU/VRAM will be. The SSD is not a catch-all solution.

The point of my list was not to say that I/O is a catch-all. My point is that pushing TFLOPS to go beyond 1800p/60FPS has limited value for the majority of gamers. The highest impact on graphics in terms of bang-for-the-buck will come from three areas and I/O is key for one of them:

- More textures and higher texture resolution - I/O is key to achieve this - together with more advanced geometry
- Ray Tracing
- Much more advanced physics

I tried to summarise that at the end of my post, i.e. the link between my 3rd point and I/O - not every point on the list.
 
Last edited:
Yup.

Cost and practicality are always the first things that pop into my head when I see a PC brag post, along with technical knowledge and a head for tinkering being almost a prerequisite.

“lol, just splash out £2000 on this huge ugly box and reorganise your house so you can fit in a computer desk if you want to play games at 4K/120, lol.”
It actually was closer to 4000 EUR. And yes I connected it to a TV, shocking I know.
 

daninthemix

Member
I am glad PC gamers have finally got here, but PS4 Pro owners have been playing 4K checkerboard games for almost 4 years now. DLSS 2.0 is inarguably better, but checkerboarding and all the other techniques PS4 Pro games have been using have produced some stunning results.

Native 4K is a waste of resources. I'm glad PC finally has something similar.
That's right son, live in your bubble of denial.
 

ZywyPL

Banned
Yup.

Cost and practicality are always the first things that pop into my head when I see a PC brag post, along with technical knowledge and a head for tinkering being almost a prerequisite.

“lol, just splash out £2000 on this huge ugly box and reorganise your house so you can fit in a computer desk if you want to play games at 4K/120, lol.”

SFF (small form factor) PCs have gained huge popularity in recent years, while at the same time the consoles will be bigger than ever, much bigger. You can easily fit top-end components into a sub-10L case, the same size as a PS5/XBX.


Is NVIDIA giving away a free case+CPU+mobo+RAM+SSD+OS+PSU+controller with a 3070? Because I will pre-order a 3070 before a PS5/XSX



You cannot have a Ferrari for the price of a Fiat Panda, sorry.
 
Last edited:

Jaxcellent

Member
The solution on the PS4 Pro looks fine to me. I'm currently playing GoT, and you wouldn't believe how amazing it looks on my pseudo-4K projector; my jaw is on the floor almost every 15 minutes.

To me, the results were only kinda visible at 800% zoom. I'll take 1440p checkerboarded to 4K at 60 frames a sec.

Edit: typo, I hate my phone
 
Last edited:

fybyfyby

Member
The hype for the upcoming consoles has focused primarily on their new I/O infrastructures, especially when it comes to the PS5 (as attested by the million or so GAF threads on the subject). The Series X looks like being no slouch in this area either, with its own (much less talked about) solution, Velocity Architecture. Other types of "secret sauce" are often alluded to, but rarely actually explained in detail.

Who knew that all along the chefs at Nvidia were busy in the kitchen working on a delicious concoction of their own. I'm talking about DLSS 2.0, of course. While PCs are often characterised as big lumbering hulks, having to use raw power (and PC users' willingness to spend copious amounts of money on said power) to drive past the level of performance seen on consoles, this time around it seems that the PC is the one taking the more nimble and efficient approach.

I'm not usually one to buy into the hype, but the results of DLSS 2.0 are already plain to see. What's more, those results are only on the current line of Nvidia GPUs; we can almost certainly expect even more impressive performance when the next-gen Nvidia GPUs drop (probably a little earlier than the new consoles). I suppose AMD could have something up their sleeves regarding machine learning (it would be strange if they had ignored such a hot field completely), but if any of this tech is making its way into the next-gen consoles, then both AMD and Sony/MS are keeping really quiet about it. One reason for concern is that DLSS 2.0 seems partially dependent on hardware (i.e. the tensor cores), which the consoles appear to lack.

Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2 teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?

Anyway, the proof of any good sauce is in the tasting and I can't wait for the next-gen consoles and GPUs to be released later this year so that we can finally know for sure.
DLSS-like algorithms are very good, and I'm curious how AMD will react to it; otherwise it hasn't a chance against NVIDIA.
Sony already patented something like DLSS, so maybe there will be a surprise with the PS5.
 

Mr.ODST

Member
DLSS-type solutions need to be used in next-gen consoles, especially if they want to wow with ray tracing. Extremely impressed with Minecraft etc. using DLSS 2.0, but I only play in 1080p.

Thinking of upgrading to a 2080 PC
 

VFXVeteran

Banned
The point of my list was not to say that I/O is a catch-all. My point is that pushing TFLOPS to go beyond 1800p/60FPS has limited value for the majority of gamers. The highest impact on graphics in terms of bang-for-the-buck will come from three areas and I/O is key for one of them:

- More textures and higher texture resolution - I/O is key to achieve this - together with more advanced geometry
- Ray Tracing
- Much more advanced physics

The only aspect that I would somewhat agree with is more textures and their resolution. But none of the other points you mentioned have anything to do with I/O being a key to increasing graphics fidelity.

Consider the following graphics features:

1) Anisotropic filtering
2) Shadow quality
3) Shadow Distance
4) Lighting quality (and this has much more to do with ray-tracing)
5) Godrays quality
6) Depth of Field
7) Ambient Occlusion (more ray-tracing)
8) Material shaders
9) Reflections (more ray-tracing)
10) Motion blur
11) Lens flare
12) Rain occlusion
13) Water materials
14) Hair materials
15) Procedural geometry
16) Volume smoke, clouds
17) FX (fire, explosions, etc..)
18) Animation

None of those have anything to do with relying on the SSD.
 

Elog

Member
The only aspect that I would somewhat agree with is more textures and their resolution. But none of the other points you mentioned have anything to do with I/O being a key to increasing graphics fidelity.

Consider the following graphics features:

...

None of those have anything to do with relying on the SSD.

I believe you did not read what I wrote - I literally wrote the same thing as your so-called counter-argument, i.e. I/O is important for textures and texture resolution and NOTHING else. However, that is one of the big areas to improve, and the new consoles in general and the PS5 in particular are doing that.
 

VFXVeteran

Banned
I believe you did not read what I wrote - I literally wrote the same thing as your so-called counter-argument, i.e. I/O is important for textures and texture resolution and NOTHING else. However, that is one of the big areas to improve, and the new consoles in general and the PS5 in particular are doing that.

I'm sorry.. my mistake. I did not read the end of your first point and thought you meant that I/O was key in all of your points. My bad.
 
Last edited:

Redlight

Member
I game exclusively on consoles, but there is no doubt that PCs will always have the upper hand with the latest technologies and developments. Consoles are a tradeoff of the best available hardware (for a set cost to the consumer) at the time of launch.

Even if a console was more powerful than the best PC at launch, PC gamers won't have to wait long for something even better.
 
After all the hoo-ha I'm still not sure which resolution I should select in the Death Stranding options; I have a 2080 Ti and I'm playing on a 4K TV. I'm apparently super dumb!
 
No well-funded studio is making games for something as powerful as a 2080 Ti. By the time they are, the 2080 Ti will be obsolete. It is already a card due to be handily replaced at the top.

When the PS4 launched, it was considered a pretty mid-tier gaming PC at best. Good gaming PC hardware was already a fair step ahead of the game. But where were the PC-only games at that time that looked as good as The Last of Us Part II? Or Ghost of Tsushima? Where are they now?

Horsepower can unquestionably get you frame-rate, as well as diminishing returns in image quality, but it is well-funded and talented artists and programmers that deliver what is considered good graphics, as well as raising the bar a "generation".
You can run Unreal Tournament at 8K 120 FPS and it still looks like a 90's game, despite immaculate image quality and super responsive and smooth frame-rate.

Like it or not, PC gaming and console gaming are tied together. The kind of graphics you can expect on PC will take a large step up because of the next generation of consoles. Fast IO will become a minimum requirement, rather than just an underutilised load-speed-booster, which is largely the only thing SSD technology has delivered on PC so far, despite being available for years. Nobody really targeted it and relied on the speed it could deliver per game loop tick.

The more expensive, more "numerously-transistored" hardware will always be more capable, but without well-funded and talented studios targeting it, it will always be underutilised, and won't ever be the reason games and graphics take a giant leap forward. It just gives you today's equivalent of Unreal Tournament with shitloads of pixels per frame and per unit of time.

Personally I'm OK with that and consider myself a PC gamer first and foremost, but it's arrogant and ignorant to think PC is driving anything forward. It hasn't done for a long time now.
 
Last edited:

Kazza

Member
I wasn't sure that this was worth a new thread, so thought it best to post it here (although it only says "ray tracing", the article says these all support DLSS too):

Ray tracing games you can play right now:
  • Fortnite
  • Minecraft
  • Battlefield V
  • Call of Duty: Modern Warfare (2019)
  • Control
  • Deliver Us The Moon
  • Mechwarrior V: Mercenaries
  • Metro Exodus
  • Quake II RTX
  • Shadow of the Tomb Raider
  • Stay in the Light
  • Wolfenstein: Youngblood
  • Amid Evil
  • Bright Memory

Ray tracing games on the way:
  • Cyberpunk 2077
  • The Witcher III
  • Crysis Remastered
  • Call Of Duty: Black Ops Cold War
  • World Of Warcraft: Shadowlands
  • Observer: System Redux
  • Dying Light 2
  • Atomic Heart
  • Doom Eternal
  • Vampire: The Masquerade – Bloodlines 2
  • Watch Dogs: Legion
  • Enlisted
  • Justice
  • JX3
  • Synced: Off-Planet

Some big hitters there (especially good for what will no doubt be demanding games such as Cyberpunk), and it's also good to see some older titles get a DLSS update as well. Many big PC publishers are still missing though (Sega, Capcom, Ubisoft etc). With the hype around the RTX 3000 cards I'm confident that the other developers will get on board sooner rather than later.

 
Last edited:

DJT123

Member
Yup.

Cost and practicality are always the first things that pop into my head when I see a PC brag post, along with technical knowledge and a head for tinkering being almost a prerequisite.

“lol, just splash out £2000 on this huge ugly box and reorganise your house so you can fit in a computer desk if you want to play games at 4K/120, lol.”
If you have the time to post on a gaming forum, you presumably should have the time & money to organize yourself a serious gaming setup, no?
 

betrayal

Banned
It should be clear to everyone that a PC is always technically far superior when compared to a console. But all that horsepower is useless if you can't get it out on the road effectively.
Art design, choice of colors, smart effects, and other techniques that are not always related to pure technical power are more important for producing something visually impressive.
 

theclaw135

Banned
Pretty much. Power isn't jack without creative vision and the resources to bring it to life. No one in the last decade has produced a game rigorously targeting top-of-the-line PCs.
 
Last edited:

DJT123

Member
It should be clear to everyone that a PC is always technically far superior when compared to a console. But all that horsepower is useless if you can't get it out on the road effectively.
Art design, choice of colors, smart effects, and other techniques that are not always related to pure technical power are more important for producing something visually impressive.
This is a weird statement single-platform (console) gamers use a lot. There are many ways to utilize "horsepower", primarily in making sure games get out on the road effectively, with high (not merely "playable") frame rates. That's before utilizing "HP" to supersample, max out settings, or exhaustively mod an iconic game like Skyrim or Witcher 3 into a different, much better game entirely.
 
The hype for the upcoming consoles has focused primarily on their new I/O infrastructures, especially when it comes to the PS5 (as attested by the million or so GAF threads on the subject). The Series X looks like being no slouch in this area either, with its own (much less talked about) solution, Velocity Architecture. Other types of "secret sauce" are often alluded to, but rarely actually explained in detail.

Who knew that all along the chefs at Nvidia were busy in the kitchen working on a delicious concoction of their own. I'm talking about DLSS 2.0, of course. While PCs are often characterised as big lumbering hulks, having to use raw power (and PC users' willingness to spend copious amounts of money on said power) to drive past the level of performance seen on consoles, this time around it seems that the PC is the one taking the more nimble and efficient approach.

I'm not usually one to buy into the hype, but the results of DLSS 2.0 are already plain to see. What's more, those results are only on the current line of Nvidia GPUs; we can almost certainly expect even more impressive performance when the next-gen Nvidia GPUs drop (probably a little earlier than the new consoles). I suppose AMD could have something up their sleeves regarding machine learning (it would be strange if they had ignored such a hot field completely), but if any of this tech is making its way into the next-gen consoles, then both AMD and Sony/MS are keeping really quiet about it. One reason for concern is that DLSS 2.0 seems partially dependent on hardware (i.e. the tensor cores), which the consoles appear to lack.

Speaking of Nvidia and consoles, I wonder what they could potentially offer Nintendo for a theoretical Switch 2 in 2021/22? Maybe a 2 teraflop next-gen Tegra GPU loaded with DLSS 3.0 tech could significantly close the gap with the much more powerful home consoles?

Anyway, the proof of any good sauce is in the tasting and I can't wait for the next-gen consoles and GPUs to be released later this year so that we can finally know for sure.
I have no idea what you are talking about. PC gamers have been looking forward to next-gen consoles not to raise the maximum spec of PCs, but to raise the minimum specs.

Gaming PCs have been slow to advance because we haven't had a reason to upgrade. And the hope was that next gen we would finally have SSDs as the bare minimum.

But then Xbox threw a wrench in the works with the Series S. And now we are stuck with a weak minimum spec again.

Right now, if you play on PC, just about the only reason you would need a beefy gaming computer is for Virtual Reality. The minimum spec caused by consoles is just too low. And now the Xbox Series S has made sure that it stays that way for another 7 years.
 

betrayal

Banned
This is a weird statement single-platform (console) gamers use a lot. There are many ways to utilize "horsepower", primarily in making sure games get out on the road effectively, with high (not merely "playable") frame rates. That's before utilizing "HP" to supersample, max out settings, or exhaustively mod an iconic game like Skyrim or Witcher 3 into a different, much better game entirely.

I currently play primarily on the PC, but I also own the consoles. I am neither for one side nor the other. I just want good games, nothing more.

Anyway, I think your argumentation is flawed. Skyrim and Witcher 3 (and all the other games), for example, are not really more fun to play with mods than without them, and they don't even look as good as things you know from other games. You might have a bit more enthusiasm in the beginning because of the visuals, but that usually fades away quite fast. So you really don't get "a different, much better game entirely".

What you are right about is the framerate, but I was talking primarily about the visuals.
 

DJT123

Member
I currently play primarily on the PC, but I also own the consoles. I am neither for one side nor the other. I just want good games, nothing more.

Anyway, I think your argumentation is flawed. Skyrim and Witcher 3 (and all the other games), for example, are not really more fun to play with mods than without them, and they don't even look as good as things you know from other games. You might have a bit more enthusiasm in the beginning because of the visuals, but that usually fades away quite fast. So you really don't get "a different, much better game entirely".

What you are right about is the framerate, but I was talking primarily about the visuals.
Maybe you aren't a tinkerer or don't want to delve into that part of PC gaming (modding a game like Skyrim from scratch can be a strenuous chore, for example), but I insist the end result is worth it. Just take an easy-to-install texture mod like https://www.nexusmods.com/witcher3/mods/1021/ and it makes a measurable difference to an otherwise dated-looking game released 6 years ago. (I'm setting a high bar here - that modder is phenomenal & his art is true to the game's vanilla aesthetics.)
 

NinjaBoiX

Member
If you have the time to post on a gaming forum, you presumably should have the time & money to organize yourself a serious gaming setup, no?
has any device at all capable of accessing the internet in 2020 + at least a handful of spare minutes a day = has the time, space and technical knowledge to own and optimise a high-end gaming PC

Yup, that’s some ironclad logic to be sure.
 
Last edited:

DJT123

Member
has any device at all capable of accessing the internet in 2020 + at least a handful of spare minutes a day = has the time, space and technical knowledge to own and optimise a high-end gaming PC

Yup, that’s some ironclad logic to be sure.

You're bending your brain trying to classify a "PC" as some sort of exotic system vastly different from a console. You save more time with a PC (how long have we been loading games off our SSDs, for example?) and don't need much technical knowledge at all. It's just slightly more inconvenient to get started. (OK, I can see the space/living room argument.)
 
Last edited: