
Next-Gen PS5 & XSX |OT| Console tEch threaD


pawel86ck

Banned
For all the coding-to-the-metal discussion: I am a developer, but not a game dev. (Though I have written some raytracing code and wrote my own 3D engine 19 years ago that ran Quake 3 levels.) However, one of the most innovative, smartest, genuinely expert game devs, among the top 0.1 percent, and one of the most gifted 3D engine devs ever, John Carmack, said this about consoles vs. PCs:

In the PS3 era I couldn't believe GTA 5 ran on consoles (and I believe the RSX was even slower than the 7800 GTX on PC). The PC port ran extremely badly at minimum settings and 720p even on an 8800 GTX, and that was a 2x faster GPU than the 7800 GTX. So back then I would have believed Carmack's opinion.

However, maybe on current-gen consoles Carmack's words are no longer relevant, because when I had a GTX 680 I could play many PS4 ports with a 2x better framerate, and that wouldn't be the case if the PS4 GPU were 2x more capable than its PC 7850/7870 equivalent. Of course there are amazing-looking games on PS4 like Uncharted 4 (IMO it's still the best-looking game I have ever seen) or Spider-Man, but we don't know what it would take to run them on PC. Maybe Uncharted 4 would run on a 7870 after all, if only ND wanted to port it?
 

LordOfChaos

Member
Is this the first Navi MacBook?

Yep. Base models used Polaris and the top BTO models used Vega before.

Apple is getting better at using chips that just came out, too; not much else has shipped Navi on mobile, and even then only as far back as October.
 

LordOfChaos

Member
In the PS3 era I couldn't believe GTA 5 ran on consoles (and I believe the RSX was even slower than the 7800 GTX on PC). The PC port ran extremely badly at minimum settings and 720p even on an 8800 GTX, and that was a 2x faster GPU than the 7800 GTX. So back then I would have believed Carmack's opinion.

However, maybe on current-gen consoles Carmack's words are no longer relevant, because when I had a GTX 680 I could play many PS4 ports with a 2x better framerate, and that wouldn't be the case if the PS4 GPU were 2x more capable than its PC 7850/7870 equivalent. Of course there are amazing-looking games on PS4 like Uncharted 4 (IMO it's still the best-looking game I have ever seen) or Spider-Man, but we don't know what it would take to run them on PC. Maybe Uncharted 4 would run on a 7870 after all, if only ND wanted to port it?


His comment was from 2014; DX12 was released on July 29, 2015, and Vulkan on February 16, 2016. Yes, Carmack is smarter than all of us, but people really need to stop taking what he said in 2014, before low-level APIs were available on PC, as gospel for 2019.
 

Fake

Member
Yep. Base models used Polaris and the top BTO models used Vega before.

Apple is getting better at using chips that just came out, too; not much else has shipped Navi on mobile, and even then only as far back as October.
Damn... Even the latest Surfaces are still using Vega.
 

LordOfChaos

Member
Damn... Even the latest Surfaces are still using Vega.

I really want to like the Surfii; their exterior hardware design is up there with the best, if not the best, but they make these weird decisions. Like launching the Studio right ahead of a new Nvidia GPU release each time, leaving it outdated for its whole cycle, only to repeat the pattern. Like the Vega APU in the just-released 15", which doesn't get any more TDP or battery capacity than the Intel 13" and so performs worse than it in a lot of cases. Like only adopting USB-C in 2019 and still using one of three precious ports for Surface Connect with an outdated, expensive dock accessory.


Er, anyways. Not to go too OT.
 
For all the coding-to-the-metal discussion: I am a developer, but not a game dev. (Though I have written some raytracing code and wrote my own 3D engine 19 years ago that ran Quake 3 levels.) However, one of the most innovative, smartest, genuinely expert game devs, among the top 0.1 percent, and one of the most gifted 3D engine devs ever, John Carmack, said this about consoles vs. PCs:


What Carmack is saying is fundamentally true, but I don't think people understand the context in terms of modern-day consoles and consumer electronics. It's not like in the past, where the architectures themselves had intrinsic differences in how they handled various functions, calculations, algorithms etc., and coding to the metal was necessary to get the most out of the systems. And older consoles, having a MUCH simpler OS (or in some cases none at all), had no OS overhead and no OS acting as an abstraction layer for games to run on, which freed up all system resources for the game itself at runtime.

Nowadays, virtually all consumer products run on either x86 (or x86-64) or ARM processors. They all use either DDR-based or GDDR-based RAM, in either a unified or a split memory pool. They all support almost all the same connectivity standards, such as USB, PCIe, etc., and almost all the same industry-wide interfaces, such as SATA, UHS, NVMe, etc.

With such similar architectures and feature support, modern electronics these days generally see their performance differences come down to things such as the amount of RAM, memory bus bandwidth, the generation of processor used, the GPU, the amount of storage, the specific connectivity standards they support, the features of the OS they run, etc. But none of that involves 'coding to the metal'. Even on PS4 and XBO, developers cannot truly code to the metal, because if they exploit very unusual/niche architectural features that successor consoles don't have in hardware, that game will not be backwards-compatible without significant re-coding. Depending on how heavily the game originally relied on those architectural quirks, the entire game may have to be re-coded from the ground up to run on the newer system.

Sony and MS want to ensure backward-compatibility as standard, so while they may have APIs that cut down on the abstraction levels some, they are not letting any developer (even their in-house ones) code "to the metal" the way we saw devs doing with even PS3, let alone systems like PS2, N64, Saturn, SNES etc. Those days are over for mass-market consoles; they've BEEN over for several years, in fact.

We can tell that, as well, simply by looking at the relative visual gains in, say, PS4 exclusives from the start of the gen to where we are now. Yes, there have been gradual improvements on average, and most PS4 exclusives are visually arresting like few others, but...the gap between, say, Killzone: Shadow Fall and Death Stranding is noticeably smaller than the gap from Uncharted to The Last of Us was on PS3. And it has to do with more than just diminishing returns: coding "to the metal" for AAA games of this size is just unrealistic in this day and age.

You may get very specific or simpler functions and features coded at a lower level, as close to pure assembly coding as possible, but the majority of the game is not going to be coded that way. It's simply infeasible from a time and budget perspective. The visual gains we've seen this gen have mainly come from re-use of textures and assets created at the start of the gen, freeing up dev time that would otherwise go to generating completely new ones from scratch (not saying there are NO new assets being created from scratch, but it's much less than at the start of the generation, hence why cross-gen was pushed so hard early on). The visual increases also come mainly from increases in budget; that means more artists, more animators, more programmers etc. able to create more assets and/or polish existing assets further within a development time frame in line with what came before.

I feel Carmack 100% understands this context, but it's not something you can literally put in a tweet while keeping it simple. Unfortunately, I feel a lot of others are still under the impression that the differences between big console AAA games and games on other platforms come down to "secret sauce" and "coding to the metal", when those days ended in 2013. You want to know why there are few PC games with visuals comparable to God of War IV, GT Sport etc.? Because there isn't a large enough install base of high-performance GPUs on PC to justify the budgets and manpower necessary to create that level and quantity of assets. That's the only major difference between console and PC gaming at the AAA level these days, besides perhaps consoles having unified memory (keep in mind there are still some advantages to DDR/GDDR split pools too, and once memory amounts are high on both sides, the consoles' unified memory advantage mostly goes away).
 
This is relevant to the PC vs consoles efficiency debate, especially when it comes to AMD hardware:



He's an id Tech 6 developer (for people who don't know who he is).

Consoles do not use AMD's drivers/shader compiler; they use a custom software stack written by Sony/MS themselves. Games can even use hand-written assembly (close-to-the-metal programming).

"AMD GCN flops are less efficient than nVidia flops" they said. Maybe on PC, but not on consoles. ;)
 
This is relevant to the PC vs consoles efficiency debate, especially when it comes to AMD hardware:



He's an id Tech 6 developer (for people who don't know who he is).

Consoles do not use AMD's drivers/shader compiler; they use a custom software stack written by Sony/MS themselves. Games can even use hand-written assembly (close-to-the-metal programming).

"AMD GCN flops are less efficient than nVidia flops" they said. Maybe on PC, but not on consoles. ;)


To be fair, AMD's always had pretty shit drivers for their GPUs on PC, whereas Nvidia's always generally had superior ones. Or at the very least, AMD has generally had poor support for their drivers and APIs on PCs due to the dominance of Nvidia on that front, traditionally, in terms of sales and (therefore) "pot-sweetening" deals with devs to favor their APIs and drivers (some vids from AdoredTV on Youtube go into that for those interested).

So it's no surprise Sony and MS write their own software stacks. But even as low as the access Sony and MS give game developers goes, it is not exactly 1:1 with the bare-metal assembly of older systems. The presence of the OS alone is an abstraction layer that older consoles didn't really have to deal with. It's why those older systems do virtually nothing when there isn't a game running on them.

I guess my hang-up just comes down to people not knowing the full monty on what generally drives the divide between top-level console AAA and PC AAA games these days. Console devs can program toward a singular, fixed target with an install base in the millions. That gives console devs justification for budgets that can push that fixed hardware to its fullest. This doesn't exist on PC at the AAA level, because the percentage of PC gamers with hardware builds able to run a comparable "PC AAA exclusive", if one were to be built, is probably not even 5% of the PS4's install base.

What publisher is going to justify a $100 million budget for a AAA game specifically targeting high-end PC builds that total around 5 million users...if that? Not many, honestly. PC devs don't get to enjoy the privilege of working on a fixed target with the truly mass-market install base saturation that Sony, Nintendo and MS 1st-parties do.

Given that low-level APIs have become more prevalent on PC over the past few years, I can only imagine that a PC AAA game built around a 2080 Ti with 11GB of GDDR6, 32GB of DDR4, a Gigabyte Aorus NVMe SSD and an octa-core i9 on a slim Windows 10 OS with low-level APIs would be at LEAST comparable to the best 1st-party AAA stuff PS5 or Scarlett could put out, if not better...assuming you had a dev with comparable talent and a publisher with a comparable budget to program specifically around that spec.
 

Fake

Member
To be fair, AMD's always had pretty shit drivers for their GPUs on PC, whereas Nvidia's always generally had superior ones.
I highly disagree. When I used to have a desktop in my house (a 9000-series card), Nvidia had the worst drivers ever, to the point of damaging my GPU without my even doing an OC.
Not saying AMD doesn't have its weaknesses, but their drivers IMO are far more stable.
 
But even as low as the access Sony and MS give game developers goes, it is not exactly 1:1 with the bare-metal assembly of older systems. The presence of the OS alone is an abstraction layer that older consoles didn't really have to deal with. It's why those older systems do virtually nothing when there isn't a game running on them.
Fair enough. Even the previous generation of consoles had an OS and virtualization (i.e. PS3/GameOS). People need their messages, party chat etc. We're not going back to the SNES/Genesis era.

You could also argue that modern CISC/x86 CPUs have an extra layer of abstraction (microcode, CISC macro-ops to RISC micro-ops translation, GPRs vs internal registers, register renaming) vs old school RISC CPUs giving you raw access to everything. It is what it is. :)
 
I highly disagree. When I used to have a desktop in my house (a 9000-series card), Nvidia had the worst drivers ever, to the point of damaging my GPU without my even doing an OC.
Not saying AMD doesn't have its weaknesses, but their drivers IMO are far more stable.

Hm, that's interesting. What games do you play on that setup? I was speaking in generalizations, but even then it would probably have been better to word it as AMD's stuff having worse support from developers, who tend to optimize their games for Nvidia's APIs and software stack.

It's very likely not that AMD's actual APIs or drivers are fundamentally poor, just that devs aren't coding to them (by and large) as much as Nvidia's. It doesn't help AMD's case that Nvidia has magnitudes more budget and resources to play with when it comes to GPUs; it's highly debatable whether Navi would be where it is right now without the Sony/Microsoft console partnerships covering huge chunks of the costs (and being guaranteed clients purchasing at the scale of millions).

Fair enough. Even the previous generation of consoles had an OS and virtualization (i.e. PS3/GameOS). People need their messages, party chat etc. We're not going back to the SNES/Genesis era.

You could also argue that modern CISC/x86 CPUs have an extra layer of abstraction (microcode, CISC macro-ops to RISC micro-ops translation, GPRs vs internal registers, register renaming) vs old school RISC CPUs giving you raw access to everything. It is what it is. :)

Yeah, I can agree with that. Even the 6th gen, I would say, had OS abstraction and virtualization; the OG Xbox is a clear-cut example. It's just that the footprint of those things was much smaller than it is now, because those systems weren't pushing things like streaming apps with 4K playback, or even half of the features PS4 and XBO have today on the UI QoL side of things.

I will admit I'm a bit peeved those full-on to-the-metal days are over for consoles, but it's out of necessity. Maybe we'll see a growth market in the future for lower-scale systems appealing to retro-enthusiast tastes, providing hobbyists with that type of access on systems aimed only at playing games.

It wouldn't be anything meant for mainstream consumption, but it'd be pretty cool to see. I've seen people on spots like AtariAge mention building their own FPGA-based 2D consoles, which is interesting. At the moment, though, it's a field with seemingly limited appeal to a small number of retro enthusiasts. That's as close as we'll get to any gaming console embracing full to-the-metal coding again (and they'll be magnitudes simpler than anything coming from the Big 3).
 

VFXVeteran

Banned
To be fair, AMD's always had pretty shit drivers for their GPUs on PC, whereas Nvidia's always generally had superior ones. Or at the very least, AMD has generally had poor support for their drivers and APIs on PCs due to the dominance of Nvidia on that front, traditionally, in terms of sales and (therefore) "pot-sweetening" deals with devs to favor their APIs and drivers (some vids from AdoredTV on Youtube go into that for those interested).

So it's no surprise Sony and MS write their own software stacks. But even as low as the access Sony and MS give game developers goes, it is not exactly 1:1 with the bare-metal assembly of older systems. The presence of the OS alone is an abstraction layer that older consoles didn't really have to deal with. It's why those older systems do virtually nothing when there isn't a game running on them.

I guess my hang-up just comes down to people not knowing the full monty on what generally drives the divide between top-level console AAA and PC AAA games these days. Console devs can program toward a singular, fixed target with an install base in the millions. That gives console devs justification for budgets that can push that fixed hardware to its fullest. This doesn't exist on PC at the AAA level, because the percentage of PC gamers with hardware builds able to run a comparable "PC AAA exclusive", if one were to be built, is probably not even 5% of the PS4's install base.

What publisher is going to justify a $100 million budget for a AAA game specifically targeting high-end PC builds that total around 5 million users...if that? Not many, honestly. PC devs don't get to enjoy the privilege of working on a fixed target with the truly mass-market install base saturation that Sony, Nintendo and MS 1st-parties do.

Given that low-level APIs have become more prevalent on PC over the past few years, I can only imagine that a PC AAA game built around a 2080 Ti with 11GB of GDDR6, 32GB of DDR4, a Gigabyte Aorus NVMe SSD and an octa-core i9 on a slim Windows 10 OS with low-level APIs would be at LEAST comparable to the best 1st-party AAA stuff PS5 or Scarlett could put out, if not better...assuming you had a dev with comparable talent and a publisher with a comparable budget to program specifically around that spec.

Despite the inefficiencies from the added layers of abstraction due to the OS and other elements, I seriously doubt PS5/Scarlett is getting anywhere near the performance level of a 2080 Ti. This current generation proves that, and next gen will have an even less proprietary optimization path due to the hardware requirements (mainly backward compatibility).

We will eventually be able to put this to the test next year.

My hang-up is people assuming a 1st-party game's graphics and presentation artwork correlate directly to the hardware used to render them. This is never the case. It's especially problematic when screenshots are used for comparisons in technical discussions instead of looking at what is missing or especially important (e.g. baked AO vs. dynamic RTX AO). This is one of the reasons DF does these kinds of technical comparisons: to eliminate artistic opinions and look at what the hardware is really capable of.
 

VFXVeteran

Banned
Also, to add: today's games are shader bound. Their shaders are nearly as complex as the ones made in the film industry. I can't imagine converting GLSL shader code to assembly and beating the hardware vendor's compiler by anything more than 1%. It just wouldn't be worth the time or effort.

I am hoping the remaining 1st-party developers will get the chance to build higher-level graphics engines that are portable across platforms (like Unreal 4, CryEngine, Frostbite, etc.). It's really the only way to spend more time on asset creation (which they are VERY, VERY good at) and less time reinventing the wheel on some special hardware.
 

llien

Member
Coding to a fixed spec has advantages, especially when developers have the time to get used to the hardware. Hell, see some C64 demos done today - they're absolutely mind-blowing. It's just that using Carmack as a reference, in this day and age, is quite yawn-inducing.
What has changed in the game dev world to make his statement irrelevant?
 

VFXVeteran

Banned
What has changed in the game dev world to make his statement irrelevant?
The release of low-level APIs like Vulkan and DX12 (low-level-"ish"), combined with the fact that consoles now use PC hardware components.

My question is: what is the difference in performance between a 1st-party game like Uncharted 4 and a 3rd-party game like Tomb Raider if they are both compiled for the same platform? I don't think the advantage is nearly as big as people are implying.
 

llien

Member
The release of low-level APIs like Vulkan and DX12 (low-level-"ish"), combined with the fact that consoles now use PC hardware components.
Where is the major performance boost from it, pretty please? Have I missed it in the PC gaming world?
I assume the theory goes that before, one could not "code to the metal", but now one can and of course would.

My question is: what is the difference in performance between a 1st-party game like Uncharted 4 and a 3rd-party game like Tomb Raider if they are both compiled for the same platform?
So, The Witcher on Switch-like PC hardware, pretty please, can I see one?

It is mostly about the assets. It's a slider for PC hardware; it's specifically targeted optimization (not code, there is only so much one can optimize in code) for consoles.
 
So, The Witcher on Switch-like PC hardware, pretty please, can I see one?

Switch is kind of a bad example; PCs aren't using ARM processors (the TDP savings they'd bring would be phenomenal, but the IPC gap between the best ARM and the best x86 processors is still quite vast).

If CDPR wanted, they likely could come very close to the Switch version of Witcher III on a PC of comparable spec, if they programmed against that specific build with low-level-like APIs and gutted any unnecessary OS features. But as has been said multiple times, there would be no financial incentive for them to do so, therefore it's not gonna happen.

Financial incentive is probably 90% of the reason any game gets the budget and resources it receives; a PC-exclusive dev targeting top-of-the-line PC hardware won't have access to a mass market for that game the way ND does targeting PS5 spec or Lionheart does targeting Scarlett (or a Nintendo 1st-party targeting the Switch, even).
 

llien

Member
If CDPR wanted, they likely could...
Yep. And they would want to, if there were a console with that hardware... oh wait... :)

PS
And I'm not buying "low-level API"; what freaking year is it? Why are people expecting "API layers" to have a major impact on performance? CDPR shrank the game's complexity to exactly match what the Switch can deliver, a luxury PC doesn't get.
 

VFXVeteran

Banned
Yep. And they would want to, if there were a console with that hardware... oh wait... :)

PS
And I'm not buying "low-level API"; what freaking year is it? Why are people expecting "API layers" to have a major impact on performance? CDPR shrank the game's complexity to exactly match what the Switch can deliver, a luxury PC doesn't get.

You should get yourself a Vulkan API book. There is no bare-metal coding anymore -- that part is understood. But there is a way to set up your rendering pipeline that can be more efficient, by giving you more control over memory, threads, scheduling, etc.; at least that's what I read in my Vulkan book. There are a LOT of states to manage in that API, whereas DX9 didn't query the hardware so rigorously. Even the latest OpenGL that we use here is pretty high-level compared to Vulkan.
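To give a flavor of what "a LOT of states" means in practice, here's a minimal C++ sketch (my own illustration, not taken from any book or engine) of two choices Vulkan pushes onto the application that DX9 / classic OpenGL made for you: which queue family work gets submitted to, and which memory type an allocation comes from.

```cpp
// Minimal illustration of Vulkan's explicit control over scheduling and memory.
// These are real Vulkan calls, but the selection policy is just an example.
#include <vulkan/vulkan.h>
#include <vector>
#include <cstdint>

// Find a queue family that supports transfer but not graphics, so buffer/texture
// uploads can run on a dedicated DMA queue instead of stalling the graphics queue.
int findDedicatedTransferQueue(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());
    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_TRANSFER_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return static_cast<int>(i);
    }
    return -1;  // no dedicated transfer family; fall back to the graphics queue
}

// Pick a memory type yourself: e.g. device-local for GPU-only resources,
// host-visible for staging uploads. DX9-era APIs hid this choice entirely.
int findMemoryType(VkPhysicalDevice gpu, uint32_t allowedTypeBits, VkMemoryPropertyFlags wanted) {
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        const bool allowed  = (allowedTypeBits & (1u << i)) != 0;
        const bool hasFlags = (props.memoryTypes[i].propertyFlags & wanted) == wanted;
        if (allowed && hasFlags)
            return static_cast<int>(i);
    }
    return -1;  // caller has to retry with a less strict combination
}
```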

I get what you are saying, though. TBH, I think the major performance gains to be had are in the shader programming and the pipeline. I went to Microsoft for a week of training on their HoloLens, and they troubleshot one of their Unity applications where HLSL shaders had been generated from the Unity UI. Just compiling in the "optional" parts of the shader slowed performance down by 50%. They realized that making a tool that parsed the UI and emitted HLSL code with no options gave them huge gains in FPS.
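That HoloLens story is essentially the uber-shader problem: "optional" features that are merely branched over still cost instructions and registers. Here's a hypothetical C++ sketch of the kind of generator they described; the option names and the emitted HLSL are purely illustrative, not Microsoft's actual tool.

```cpp
// Hypothetical shader-variant generator: instead of one uber-shader full of runtime
// "if (useX)" paths, emit a stripped HLSL variant containing only the features a given
// material actually uses, and compile one shader per option set.
#include <string>

struct MaterialOptions {
    bool useNormalMap = false;
    bool useFog       = false;
};

std::string emitPixelShader(const MaterialOptions& opt) {
    std::string src;
    src += "float4 main(PSInput i) : SV_Target {\n";
    src += "    float4 c = albedoTex.Sample(linearSamp, i.uv);\n";
    if (opt.useNormalMap)  // the feature is simply absent when the material doesn't use it
        src += "    c.rgb *= ShadeWithNormalMap(i);\n";
    if (opt.useFog)
        src += "    c.rgb = ApplyFog(c.rgb, i.viewDepth);\n";
    src += "    return c;\n";
    src += "}\n";
    return src;  // feed this to the HLSL compiler; no dead "optional" code survives
}
```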

Sorry for going OT guys..
 

VFXVeteran

Banned
Killzone has also had ray-tracing for reflections since 2013:

Btw N Negotiator -- I read this article. That's not the "pure" ray-tracing I'm talking about. To satisfy the term "ray-tracing" and avoid any confusion in semantics, the following things have to be met to throw that word around nowadays:

1) You have to have objects stored in a 3D acceleration structure to test for intersections.
2) The ray has to start at a given pixel, but it must also be fired from the surface again in order to satisfy some function (e.g. hidden surfaces, reflections, etc.).
3) It has to be done in world space, once the surface is about to evaluate its shading equation (NOT 2D space).
4) When firing from the surface, it again has to test objects in an acceleration structure, and the process repeats until a base case or terminating limit is reached.

2.5D space, texture space, pixel-shader space, etc. are not qualifiers for the term.

Lastly, this kind of feature has been done before with parallax occlusion mapping and many other 2D-space techniques. I wouldn't call it an accurate example of using special hardware optimizations to implement a feature that can't be done otherwise.
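To make criteria 1-4 concrete, here is a toy C++ sketch of the shape I mean (my own illustration; the BVH traversal and shading are stubbed out, so only the structure matters): hits come from an acceleration structure, everything happens in world space, and secondary rays are fired from the hit surface recursively until a depth limit.

```cpp
// Toy sketch of "real" ray tracing per the criteria above. The acceleration structure
// and shading are placeholders; the point is the world-space recursion from the surface.
#include <optional>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { Vec3 point, normal; float reflectivity = 0.5f; };

static Vec3  scale(const Vec3& v, float s)        { return { v.x * s, v.y * s, v.z * s }; }
static Vec3  add(const Vec3& a, const Vec3& b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(const Vec3& a, const Vec3& b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  reflect(const Vec3& d, const Vec3& n){ return add(d, scale(n, -2.0f * dot(d, n))); }
static Vec3  lerp(const Vec3& a, const Vec3& b, float t) { return add(scale(a, 1 - t), scale(b, t)); }

// Criterion 1: intersections are found via a 3D acceleration structure
// (BVH traversal would live here; stubbed out for brevity).
struct AccelerationStructure {
    std::optional<Hit> intersect(const Ray&) const { return std::nullopt; }
};

static Vec3 shade(const Hit& h)    { return scale(h.normal, 0.5f); }   // placeholder local shading
static Vec3 background(const Ray&) { return { 0.2f, 0.3f, 0.5f }; }    // placeholder sky color

// Criteria 2-4: the primary ray starts at a pixel; on a hit, a secondary ray is fired
// again from the surface, in world space, recursing until the termination depth.
Vec3 trace(const AccelerationStructure& scene, const Ray& ray, int depth) {
    if (depth <= 0) return background(ray);                       // base case / depth limit
    auto hit = scene.intersect(ray);
    if (!hit) return background(ray);
    Ray bounce{ hit->point, reflect(ray.dir, hit->normal) };      // fired from the surface
    return lerp(shade(*hit), trace(scene, bounce, depth - 1), hit->reflectivity);
}
```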

As an aside, the big problem with these hardware graphics boards (console or otherwise) before RTX came out was that the software was completely at the mercy of the hardware rasterizer pipeline. You could only control shading once you reached the per-pixel pipeline phase. It didn't lend itself to any sort of world-space shading (the vertex side was inappropriate). That is no good for film tools that could use real-time lighting features, and it's the main reason pre-baked lighting has been around forever. That's why the main draw for film was GPGPU (i.e. CUDA). Now path-tracers like Arnold have hardware-accelerated functions merged in with CPU computations.

Anyway.. it was a good read.
 
Yep. And they would want to, if there were a console with that hardware... oh wait... :)

PS
And I'm not buying "low-level API"; what freaking year is it? Why are people expecting "API layers" to have a major impact on performance? CDPR shrank the game's complexity to exactly match what the Switch can deliver, a luxury PC doesn't get.

Lower-level APIs were introduced to the PC space a couple of years after Carmack made the tweet someone posted in this thread. In all honesty, besides the fact that consoles have some of the best talent locked in at their 1st-party studios, who are afforded multi-million-dollar budgets to code against a singular fixed platform spec (and can use lower-level API code if necessary), there isn't a lot separating current-gen consoles from PCs from a technological POV, except perhaps unified memory.

Which, again, isn't that big a deal on PC once you put enough memory into the box; plus there are still certain things lower-latency DDR does better than higher-bandwidth GDDR. I'm just saying an obvious thing: if you took a PC dev, gave them a budget like you see Sony give its studios, and had them code against a single PC config (not giving a damn if it runs well on any other config) with specs comparable to the next-gen systems, you're going to get a game with visuals and performance at least on par with the next-gen systems.

...never mind if that PC were set to specs beyond the next-gen systems; in that case you'd get results the consoles simply cannot match. And before people say there is no precedent for this, I think you'd better remember Crysis. That was a PC game that did for its time what is being mentioned here as a hypothetical, and was (on a technical/objective level) a league ahead of any of the most visually demanding console AAA exclusives of that time, and arguably that whole generation (even including late releases like TLOU and GT6).

If it's been done before, it can be done again, but there's no one around in the PC space quite like Crytek was in the mid '00s who'd take that initiative again (except perhaps the Star Citizen team).
 

llien

Member
But there is a way to set up your rendering pipeline that could be more efficient by giving you more control of memory, threads, scheduling, etc..
Yes. And that is only FEASIBLE AT ALL for a concrete machine.
Which consoles are.
But PCs are not.

So have your slider with inefficient code and get over it.


Crysis. That was a PC game that did for its time what is being mentioned here as a hypothetical, and was (on a technical/objective level) a league ahead of any of the most visually demanding console AAA exclusives of that time, and arguably that whole generation (even including late releases like TLOU and GT6).
Crysis is an example of something NOT being optimized to run smoothly. The most notable thing about it was its obnoxious hardware requirements as a result.
 

VFXVeteran

Banned
Yes. And that is only FEASIBLE AT ALL for a concrete machine.
Which consoles are.
But PCs are not.

So have your slider with inefficient code and get over it.

Can you point to a particular console game (either X1X or PS4) this current gen that gives better results and clearly shows "effective" code that couldn't be done on a PC?
 

llien

Member
Can you point to a particular console game (either X1X or PS4) this current gen that gives better results and clearly shows "effective" code that couldn't be done on a PC?

You are missing the point.
It's not something that COULD NOT BE DONE.
It's something nobody does, because why waste time.
 

VFXVeteran

Banned
He claims that consoles are concrete machines -- meaning you can write code specific to their hardware to achieve faster results, vs. a PC whose APIs (Vulkan, etc.) don't allow that same low-level functionality. I'm asking him where this was done this current gen (since we have PS4/X1X) in a way that clearly shows it.
 

VFXVeteran

Banned
You are missing the point.
It's not something that COULD NOT BE DONE.
It's something nobody does, because why waste time.

"So have your slider with ineffective code and get over it .."

But ok, even if you are correct, what difference does it make in the grand scheme of things? If you have a console that allows low-level access and the PC iteration doesn't but can still yield better results, what's the point?
 

llien

Member
But ok, even if you are correct, what difference does it make in the grand scheme of things? If you have a console that allows low-level access and the PC iteration doesn't but can still yield better results, what's the point?
I would remove "low level access" from the equation.
From my POV it's not "low level access", but targeting of the console hardware, that gives it an edge over PC.
It is just my opinion which, of course can be wrong.
 

VFXVeteran

Banned
I would remove "low level access" from the equation.
From my POV it's not "low level access", but targeting of the console hardware, that gives it an edge over PC.
It is just my opinion which, of course can be wrong.

Well, you are wrong if you can't prove what you are saying. I'm not trying to stir up the pot with console vs PC, but if you make a claim, some of us want proof for said claim.
 

Racer!

Member
He claims that consoles are concrete machines -- meaning you can write code specific to their hardware to achieve faster results, vs. a PC whose APIs (Vulkan, etc.) don't allow that same low-level functionality. I'm asking him where this was done this current gen (since we have PS4/X1X) in a way that clearly shows it.

The 2x speed-up Carmack talks about comes from having a fixed target. When you tailor the code (and data stream) to one single known target, you get about 2x efficiency, component- and system-wide.

Of course, you can write machine code on PC; you can't get any deeper than that. As for APIs, you can get as low-level on PC as on console.
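As a trivial, hypothetical C++ illustration of what "tailoring to one known target" buys you (the FIXED_CONSOLE_TARGET flag and the numbers are made up): on a console every retail unit is identical, so tuning constants can be baked in at compile time, whereas the same code on PC has to query the machine at runtime and settle for something generic.

```cpp
// Hypothetical sketch: FIXED_CONSOLE_TARGET is an invented build flag and the numbers are
// illustrative. On a fixed spec the constants are baked in; on PC the code has to discover
// the machine at runtime and pick a value conservative enough to work everywhere.
#include <thread>
#include <cstddef>

#if defined(FIXED_CONSOLE_TARGET)
constexpr unsigned    kWorkerThreads = 6;   // known core budget after the OS reservation
constexpr std::size_t kJobBatch      = 64;  // tuned once against the known caches and GPU
#else
const unsigned kWorkerThreads =
    std::thread::hardware_concurrency() > 2 ? std::thread::hardware_concurrency() - 2 : 1u;
const std::size_t kJobBatch = 32;           // generic value that has to suit every PC out there
#endif
```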
 

VFXVeteran

Banned
The 2x speed-up Carmack talks about comes from having a fixed target. When you tailor the code (and data stream) to one single known target, you get about 2x efficiency, component- and system-wide.

Right, but my point was about proving the fixed target has an advantage by showing a game that actually uses that advantage. It really doesn't make much sense to praise the efficiency of the consoles when it's not used in a game. We can't just assume this is the case, since there are a lot of cross-platform games and none of them have shown any advantage in efficiency.
 

VFXVeteran

Banned
I asked about Witcher on a PC with Switch-like power.
Oh, it's ARM, doesn't matter, focus on the GPU part.

I don't have the game on Switch (nor even the console) to see what they took out, so I'm not even going to pretend to know. Maybe someone who has the game can speak on that.
 
Back to PS5 / Xbox talk:

Anyone else got a sneaking suspicion that MS is going to go all out this gen? I reckon they're going to push more RAM, a faster CPU, and possibly even attempt to undercut the PS5 in price:

PS5 / Xbox Scarlett

RAM: 16+4 / 24+4
CPU: 2.8GHz / 3.2GHz
GPU: 10TF / 11TF
Price: £449 / £399

(and then Sony will be forced to lower their price before launch)

MS would aim to make up the difference by luring more gamers into their service offerings.
 
Back to PS5 / Xbox talk:

Anyone else got a sneaking suspicion that MS is going to go all out this gen? I reckon they're going to push more RAM, a faster CPU, and possibly even attempt to undercut the PS5 in price:

PS5 / Xbox Scarlett

RAM: 16+4 / 24+4
CPU: 2.8GHz / 3.2GHz
GPU: 10TF / 11TF
Price: £449 / £399

(and then Sony will be forced to lower their price before launch)

MS would aim to make up the difference by luring more gamers into their service offerings.
At this point, does MS even care? I mean, there was talk from important shareholders about selling the Xbox brand a few years ago.

I feel like it's more about having great tech to showcase, in order to boost MS's reputation and help create other great, appealing products, than about being 1st.
 
Yes. And that is only FEASIBLE AT ALL for a concrete machine.
Which consoles are.
But PCs are not.

So have your slider with inefficient code and get over it.



Crysis is an example of something NOT being optimized to run smoothly. The most notable thing about it was its obnoxious hardware requirements as a result.

If that's the case, then Crysis supports my point even more strongly. If the best-looking game of that generation (purely on technical/objective measures, if not necessarily art direction) was unoptimized...that means even weaker PC builds could have run that game if it had been optimized better. Yet all the same, we didn't see any console games that gen hit the technical benchmarks Crysis did back in '07.

So just imagine a modern-day Crysis fully optimized for a high-spec PC build with the best parts possible...you are going to get a game that, on technical/objective levels, nothing from PS5 or Scarlett will be able to reach. Those consoles would have to be optimized for in their own way, with lots of artistic liberties taken. Which is why I'm only speaking from an objective POV, not a subjective one (art direction).

Back to PS5 / Xbox talk:

Anyone else got a sneaking suspicion that MS is going to go all out this gen? I reckon they're going to push more RAM, a faster CPU, and possibly even attempt to undercut the PS5 in price:

PS5 / Xbox Scarlett

RAM: 16+4 / 24+4
CPU: 2.8GHz / 3.2GHz
GPU: 10TF / 11TF
Price: £449 / £399

(and then Sony will be forced to lower their price before launch)

MS would aim to make up the difference by luring more gamers into their service offerings.

They might do that for America and the UK, which have traditionally been their strongest markets, but in other regions I don't think they'd price below the BOM; they'd likely sell at a profit there.
 