
The Verge: Xbox Series S (7.5GB usable RAM, 4TF GPU, Up: Same CPU)

SlimySnake

Flashless at the Golden Globes
I think the 4 TF Xbox GPU will be more powerful than the 1X. RDNA1 was a 50% increase over GCN: 4 x 0.5 = 2, and 4 + 2 = 6, so 6 TF in GCN terms.

But RDNA2 promises a 50% increase over RDNA1. Plus the hardware goodies built in, and Zen. The 4 TF Xbox will be very capable and way better than the current-gen 1X.

The PS5 and Series X are more powerful than a lot of people realize.
Your numbers are a bit off. RDNA is a 25% increase per flop compared to the Polaris cards that went into the Pro and X1X, not 50%.

RDNA 2's 50% figure is performance per watt; that's power efficiency, not rendering performance. We don't know how much better it is than RDNA 1.0 per flop, but 10-25% should be on the cards.
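For what it's worth, the arithmetic both posts are doing can be sketched in a few lines (the uplift percentages are the thread's assumptions, not confirmed figures):

```python
# Rough sketch of the "effective GCN TFLOPs" arithmetic above.
# The uplift figures are the thread's assumptions, not confirmed numbers.
def gcn_equivalent_tf(rdna_tf, per_flop_uplift):
    """Convert an RDNA TFLOP figure to a GCN-equivalent one,
    given an assumed performance-per-flop uplift (0.25 = 25%)."""
    return rdna_tf * (1 + per_flop_uplift)

# 4 TF RDNA at the 25% per-flop uplift over Polaris:
print(gcn_equivalent_tf(4.0, 0.25))  # 5.0
# The more optimistic 50% figure from the earlier post:
print(gcn_equivalent_tf(4.0, 0.50))  # 6.0
```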
 

Dolomite

Member
Good luck to devs willing to make games run at the best possible performance on both 7.5GB RAM and 4TF and also 13.5GB RAM and 12TF. If this is true, MS is pretty much asking devs to make 2 versions of each game. Of course what will really happen is that they'll just make the higher-spec version, then quickly disable and lower stuff until it runs OK on Lockhart. So games are probably going to look like rushed Switch ports.
Quite the opposite. Engine scaling is handled on the platform side this gen, as a tool MS offers devs working in the Xbox ecosystem. The Dirt 5 dev explains it a bit more around the 12:15 mark (I swear he almost slips up; it seemed like he was going to mention the Series S by name as well).
The way he describes the scalability of the unified MS GDK sounds just like the scaling we saw with the UE5 engine. Top down, high-end PC to mobile.


@ 12:15
 

Gamerguy84

Member
Your numbers are a bit off. RDNA is a 25% increase per flop compared to the Polaris cards that went into the Pro and X1X, not 50%.

RDNA 2's 50% figure is performance per watt; that's power efficiency, not rendering performance. We don't know how much better it is than RDNA 1.0 per flop, but 10-25% should be on the cards.

It came from Tom's Hardware. I tried copying the link but it's an AMP link. Google "rdna vs gcn" and click on Tom's.

The way I read it is 50% more performance per watt.


[image: Tom's Hardware RDNA vs GCN chart]
 

SlimySnake

Flashless at the Golden Globes
It came from Tom's Hardware. I tried copying the link but it's an AMP link. Google "rdna vs gcn" and click on Tom's.

The way I read it is 50% more performance per watt.


[image: Tom's Hardware RDNA vs GCN chart]
Yes. Performance per watt means it offers more performance for the same power consumption. So if RDNA 1.0 consumed 180W for an 8 TFLOPS card, RDNA 2.0 would do 8 TFLOPS at just 120W.

What it would NOT do is make those same 8 TFLOPS perform like 12.

What you are looking for is performance per clock. That gives the performance-per-flop metric we can use to figure out what an RDNA 2 teraflop is worth.
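A quick sketch of the distinction (the 180W/8 TFLOPS card is just the post's own example, not a real SKU):

```python
# A 50% performance-per-watt gain lowers power draw at fixed throughput;
# it says nothing about how much work each flop actually does.
def power_at_same_perf(old_watts, perf_per_watt_gain):
    """Watts needed for the same performance after an efficiency gain."""
    return old_watts / (1 + perf_per_watt_gain)

# The post's example: a 180 W, 8 TFLOPS card with +50% perf/watt.
print(power_at_same_perf(180, 0.5))  # 120.0 W for the same 8 TFLOPS
```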
 

Kazza

Member
Why are some people making such a big deal about the number of teraflops all of a sudden? Haven't we all learned from countless GAF threads by now that it is the SSD which is the most important component of a console? So long as the S has the same SSD as the X, then surely the performance will be more or less the same, right?
 

THE DUCK

voted poster of the decade by bots
This is a great idea; there are millions of buyers for this. All the people who normally buy a console when it hits the $199-$250 price point are potential buyers.

Think of this Christmas in Canada. Joe casual gamer, who owns a regular PS4 or Xbox One and plays once a month, can buy a Series X or PS5 for $699 Canadian... but COVID times are unstable and money is a bit tight. That's $780 bucks with tax.
But he sees an ad for the new Series S: SSD, ray tracing, the trailer looks decent, and it's still a nice upgrade over his current system.
$329-$349 Canadian and he's pretty happy. Even more so if there's some AI 4K scaling magic dust in the mix...

Also, I'm tired of hearing how making a game for multiple platforms ruins the most powerful version. This has been and will continue to be false.
 
Last edited:

93xfan

Banned
1) PC minimum specs can be set to match whatever devs want
2) PC still requires optimization to run properly, so another, much weaker SKU will require a lot of attention

-Optimization is supposed to be made a lot easier with built in tools. It’s all rumored at this point as the Series S doesn’t even officially exist yet.

-What makes you think this will be tougher or even as tough as making low, medium, high and ultra settings?
 

93xfan

Banned
No, high budget games are designed around consoles.

But they’re usually made to be able to scale to lower end PCs.

Anyway, this does not make it impossible to make games with the XSX as the lead platform.

The only thing I guess I could see being a limiting factor would be if a developer wants to use a ton of RAM for things that are not textures and don’t scale with resolution. Then RAM could be an issue. I don’t foresee that being an issue on the vast majority of games.
 
Didn't the guy during the UE5 tech demo on PS5 say that they were streaming data (all that virtual texturing and micropolygons) from the SSD on a PER FRAME basis??

Not per second but frames instead!

It was fast enough that the UE5 PS5 demo was designed with 8K TEXTURES! That wouldn't be possible without the streaming from the SSD. That is what UE5 was doing on PS5 and that will apply not only to Series X but for Series S also.

So I'm not concerned about the 7.5 GB in Series S. The SSD in the next-gen consoles isn't a luxury but a necessity. It will act in some ways (as Mark Cerny said) more like RAM, to compensate for the feeble jump in RAM compared to previous gens.

The Series S will have the same ridiculously fast I/O as PS5 and Series X and that is what will more than compensate for the lower amount of memory compared to One X.
 
Last edited:

Gamerguy84

Member
Yes. Performance per watt means it offers more performance for the same power consumption. So if RDNA 1.0 consumed 180W for an 8 TFLOPS card, RDNA 2.0 would do 8 TFLOPS at just 120W.

What it would NOT do is make those same 8 TFLOPS perform like 12.

What you are looking for is performance per clock. That gives the performance-per-flop metric we can use to figure out what an RDNA 2 teraflop is worth.

Right, but wouldn't it be better than a 6TF GCN card? That's what I was saying. I mean, the 5700 XT 9.? (RDNA1) ran pretty comparable to a Radeon 7, a 13.8 TF GCN card.
 
One X is a 4K console and has 9 GB of usable RAM for games. Lockhart, outputting at a quarter of the resolution, could do with less than that, especially when the Velocity Architecture is helping conserve texture memory with sampler feedback and a high-speed paging approach.

A machine with slightly lowered CPU clocks (similar to PS5 CPU clocks), a 512GB SSD, a 4+TF GPU, and 10 GB RAM could be around $200-250 while still being able to give a great next-gen experience at lower resolutions than the XSX. And with DLSS 2.0-type upscaling, it could rival or even outperform the PS5.

Remember:

[image: Control 4K ray tracing, DLSS 2.0 performance mode benchmark]
Stop this bullshit. By your nonsense logic, Microsoft could have just made a 6TF console that surpasses the PS5 for $350 and not bothered with the XSX.
 
The PS5 only has 16 GB of GDDR6 (I don't know how much is reserved for the OS).

Is that amount of memory (around 12 or 13 GB) enough to display that insane number of polygons bathed in those glorious 8K textures???

The blazing-fast SSDs of the next-gen consoles are what allowed Epic to display those 8K textures all over the place in the Unreal Engine 5 demo. These consoles will punch well above their 16 GB weight (or Series S's 10 GB) when it comes to texture resolution and variety.
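Some napkin math on why streaming matters here (this assumes a plain uncompressed RGBA texture; real engines use block compression, which shrinks this several-fold but doesn't change the conclusion):

```python
# A single uncompressed 8K RGBA texture is huge relative to a
# 10-16 GB memory pool, which is why per-frame SSD streaming matters.
def texture_mb(width, height, bytes_per_pixel=4):
    """Size of one uncompressed texture in MiB."""
    return width * height * bytes_per_pixel / 2**20

print(texture_mb(8192, 8192))  # 256.0 MiB per uncompressed 8K texture
print(texture_mb(2048, 2048))  # 16.0 MiB for a typical 2K texture
```

Even at an aggressive 8:1 block compression, a few dozen unique 8K textures would exhaust the whole RAM pool, so the engine has to page mips in and out as the camera moves.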
 

seanoff

Member
one can only hope they name it Fred or something.

the Xbox naming system is fucking hideous.

but it will end up being Xbox Series X One or something equally silly.
 

CrustyBritches

Gold Member
Quick example:
XSS should be >= RX 580
XSX should be >= RTX 2080

Average frame rate in 23 game benchmark suite:
RX 580 @ 1080 = 75fps
RTX 2080 @ 4K = 67fps

I'm checking out VRAM usage and my inclination is that they'd be much better served with the originally rumored 12GB mem instead of 10GB mem. Hopefully this rumor is incorrect about that aspect.
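For context, the raw pixel throughput implied by those benchmark averages (a crude metric that ignores per-game settings differences):

```python
# Pixel throughput implied by the quoted benchmark averages.
def pixels_per_second(width, height, fps):
    return width * height * fps

rx580   = pixels_per_second(1920, 1080, 75)  # RX 580, 1080p @ 75 fps
rtx2080 = pixels_per_second(3840, 2160, 67)  # RTX 2080, 4K @ 67 fps
print(rx580, rtx2080)             # 155520000 555724800
print(round(rtx2080 / rx580, 2))  # the 2080 pushes ~3.57x the pixels
```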
 

Kumomeme

Member
To those who think optimization is easy, even compared to PC, as if you just reduce or increase the settings/preset like a "magic slider": remember Assassin's Creed Unity? That game's optimization was so terrible that none of the magical "sliders" in the game would do jack.

And that came from a well-known big studio with hundreds of staff across the globe.

There are devs who have told us the optimization process is not easy. For example, former FFXV lead designer Wan Hazmer described the process as "HELL ON EARTH", and other developers have said optimization is especially hard for smaller studios, particularly indies.

Don't forget how Capcom managed to miss a proper Xbox One X patch for Resident Evil 3 (it has been fixed now).
And that's for the stronger console refresh; I wonder how it will go for the weaker one. Not to mention later, when both companies release another stronger mid-gen refresh, there will be 3 SKUs from MS that devs need to optimize for. Looking at the current gen, lots of devs still don't bother to release upgrade patches.
 
Last edited:

Dante83

Banned
If "slightly reduced CPU speed" means 3.5 GHz, then its CPU is just as good as the PS5's. The GPU will be just right, so this console could be a replacement for the Xbox One series of consoles. MS could phase out the Xbox One line, since the price of the Series S is rumoured to be quite low.
 

S0ULZB0URNE

Member
One X is a 4K console and has 9 GB of usable RAM for games. Lockhart, outputting at a quarter of the resolution, could do with less than that, especially when the Velocity Architecture is helping conserve texture memory with sampler feedback and a high-speed paging approach.

A machine with slightly lowered CPU clocks (similar to PS5 CPU clocks), a 512GB SSD, a 4+TF GPU, and 10 GB RAM could be around $200-250 while still being able to give a great next-gen experience at lower resolutions than the XSX. And with DLSS 2.0-type upscaling, it could rival or even outperform the PS5.

Remember:

[image: Control 4K ray tracing, DLSS 2.0 performance mode benchmark]
[image: "this is your brain on drugs, just say no" meme]
 

jaysius

Banned
Lockhart is the name of the profile the Xbox Series X uses for backwards compatibility.
 
Last edited:

DrAspirino

Banned
Good luck to devs willing to make games that will run to the best possible performance on both 7.5GB RAM and 4TF but also 13.5GB RAM, and 12FT. If this is true, MS are pretty much asking devs to make 2 versions of each game.
Ehhrmm... nope. The devkits that are out there feature two profiles on the same machine: one for Series X and one for "lockhart".

The Verge said:
Developers will be able to use this Lockhart mode to test their games against this performance profile and do validation checks.

So, in theory (and may as well in practice), devs would have to write the code once and then test on both profiles. If on the Series X profile they hit 4k/60fps, they may call it a day and test it with the lockhart profile. If it reaches 1080p/60 or 1080p/30, then it's done. x86 is a scalable architecture and game engines like Unreal and Unity are scalable af (heck, they can go from mobile all the way up to cinema).
 
Lockhart is still superior.
Because of its CPU and SSD? I don't mind admitting I'm wrong, but I thought TF was a measurement of the compute units and clock speed of the GPU (CU's x CS / 1M), but I didn't look at the actual numbers that aren't TF related, so forgive me.
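For reference, the usual shader-FLOPS formula has a couple more terms than that: CUs x 64 shaders per CU x 2 FLOPs per clock x clock speed. A quick sketch using the Series X's published figures:

```python
# TFLOPS = CUs x 64 shaders/CU x 2 FLOPs per clock x clock (GHz) / 1000.
# This is a theoretical throughput ceiling, not delivered performance,
# which is why it can't be compared directly across architectures.
def tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

# Series X: 52 CUs at 1.825 GHz.
print(round(tflops(52, 1.825), 2))  # 12.15
```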
 
Last edited:

DrAspirino

Banned
Isn't the secret to this being that the APU is going to be the same as the series X but chips that lost the Binned lottery?
Exactly. Instead of discarding the "failed" ones, just rebrand them, install different microcode, and use them anyway. Intel does that all the time, same as AMD. Heck, I still remember when some Radeon HD5050 cards could be upgraded to a 5070 (or something like that) just by changing the firmware, IF you were lucky enough that your chip wasn't actually defective.
 

CRAIG667

Member
The Switch doesn't sit in the same "market" as Sony/MS; its audience's preferences are different from the ones Nintendo caters to.

There's a reason Nintendo avoided that realm; they've done so since the Wii.
I ain't the only person who owns all the consoles; price is a HUGE factor too.
 

Naru

Member
I am always like "hell yeah if that price is true I am going to pick one up 100%" and then I remember I have a decent PC and that there will be nothing on there that isn't also on PC...
 

Mister Wolf

Member
Good move for parents with young children who don't understand or give a shit about things like 4K and ray tracing. That stuff is for us 20- and 30-year-olds.
 

SlimySnake

Flashless at the Golden Globes
Right, but wouldn't it be better than a 6TF GCN card? That's what I was saying. I mean, the 5700 XT 9.? (RDNA1) ran pretty comparable to a Radeon 7, a 13.8 TF GCN card.
Radeon 7 is a Vega card, and the 5700 XT is roughly 10-15% slower than the Radeon 7.

The GPU in the Xbox One X is Polaris GCN. DF was able to downclock the 36 CU 5700 to match the 36 CU Polaris RX 580, and found it was roughly 25-30% faster at the same CU count and clock speed. That gets us to 5 Polaris GCN TFLOPS for a 4 TFLOPS RDNA 1.0 GPU. We don't know how much faster RDNA 2.0 will be, but I'd suspect it's on par with 6 Polaris GCN TFLOPS, since MS likely wants the XSS to replace the X1X.
 
D

Deleted member 775630

Unconfirmed Member
People know the deal with that; you're buying a console near the end of its life, so you can't expect any different.

If they go down the route you are suggesting, then it would be unwise of them to market Lockhart as a next-generation console alongside the Series X (e.g. "exactly the same minus the resolution and cost"), since it would mislead people.
True, but maybe they'll push it as "after 3-4 years it's a streaming xCloud box"? That would still be a bit misleading, but it would keep the device from becoming obsolete.
 

Tulipanzo

Member
How can devs heavily dislike something that is not real?
In any case, the Series S, if it exists, will easily be more powerful than 20-30% of PCs on the market. So for devs it won't be much different from supporting multiple PC specs.
It seems to be real, unfortunately, but even if it were cancelled, devs have already been briefed on it, and they were REALLY negative about it (it was reportedly cancelled after feedback).

The PC comparison is rather moot:
1) Devs can change minimum required specs if needed
2) PCs still require optimization for stuff to run well; adding more on top of that will create issues
3) It's already underpowered now; it will only get worse over the gen

This against a backdrop of MS announcing it (E3 18), cancelling it (E3 19), uncancelling it (the Game Awards), and refusing to announce it despite overwhelming evidence. They look far from confident here.
Imagine if Sony, right now, had not talked even once about the PS5, or whether it even existed, and you get how ludicrous their strategy looks.
 

Tulipanzo

Member
-Optimization is supposed to be made a lot easier with built in tools. It’s all rumored at this point as the Series S doesn’t even officially exist yet.

-What makes you think this will be tougher or even as tough as making low, medium, high and ultra settings?
While the super-easy toolset you're citing is in fact a totally made-up rumour (WC only says it's weaker), developer complaints about Lockhart have been well reported since it was revealed, then cancelled, then uncancelled, then never announced.
A major reason we might not have seen it yet is poor results from devs.

Secondly, it's harder than PC because it's a closed box.
On PC, you can just offer the option to run a game at 720p/30fps, with a lot of granularity, since users will then tweak and tinker to make the game run the way they want. You can also raise minimum settings to make the work more feasible.
Lockhart will HAVE to run the same stuff, for the whole gen, as a box that's 3 times as powerful, with a lot more RAM and extra CPU power. That will require a lot of bespoke optimization.
Furthermore, we're already seeing new tech (the UE5 and Minecraft RT demos) that will push Lockhart well south of 1080p: running stuff at 720p is OK on PCs, but it would be poison on a console.
 

Men_in_Boxes

Snake Oil Salesman
I'm going to end it all if 8 TF are needed to get from 1080p to 4k.

Please tell me Lockhart is a 30fps machine as well?
 
Damn, a lot of people in here screaming about Lockhart holding next gen back. Think of it like this.

We have the Series X:

4K-aimed device
Zen 2 CPU @ 3.5 - 3.8GHz
12TFlop GPU
16GB RAM
1TB SSD


Then Lockhart, aka Series S:

1080p-aimed device
Same CPU as Series X
4TFlop GPU (1/3rd the power of the Series X, targeting 1/4 the resolution)
Less RAM (looks like 10GB from the leak)
Same SSD speed as Series X (not necessarily the same capacity)

Games scale on the GPU side. As long as the CPU and SSD are present and running at the same speed, this isn't holding anything back. Nothing at all. On the RAM side, the Series S doesn't need as much RAM, as it won't be making use of higher-resolution textures. We see this on PC all the time.

If the CPU is clocked lower, that is a potential issue, as game logic will most likely have to be designed around the slowest system. Likewise for the inclusion of the SSD.

Let's wait and see what it is first, though. A 1080p-aimed console at $249 that is small and quiet, with the power of the Xbox One X, is a win come the holidays this year. Especially when parents baulk at the cost of the full next-gen machines at $399 - $499.
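Putting that scaling argument into numbers (treating the rumored specs above as givens):

```python
# Lockhart would have 1/3 the compute but target 1/4 the pixels,
# so its per-pixel GPU budget would actually be higher than the XSX's.
def tf_per_mpixel(tf, width, height):
    """TFLOPS available per megapixel of target resolution."""
    return tf / (width * height / 1e6)

print(round(tf_per_mpixel(12, 3840, 2160), 2))  # Series X at 4K: 1.45
print(round(tf_per_mpixel(4, 1920, 1080), 2))   # Lockhart at 1080p: 1.93
```

By this crude measure the weaker box has roughly a third more compute headroom per pixel, which is the core of the "it only scales resolution" argument; it ignores fixed per-frame costs that don't shrink with resolution.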
 
They are doing a 4K console and a 1080p console. Pretty obvious. Will just download the correct assets for each one. Not a big deal. Most people are still on 1080p.
 

THE:MILKMAN

Member
Damn, a lot of people in here screaming about Lockhart holding next gen back. Think of it like this.

We have the Series X:

4K-aimed device
Zen 2 CPU @ 3.5 - 3.8GHz
12TFlop GPU
16GB RAM
1TB SSD


Then Lockhart, aka Series S:

1080p-aimed device
Same CPU as Series X
4TFlop GPU (1/3rd the power of the Series X, targeting 1/4 the resolution)
Less RAM (looks like 10GB from the leak)
Same SSD speed as Series X (not necessarily the same capacity)

Games scale on the GPU side. As long as the CPU and SSD are present and running at the same speed, this isn't holding anything back. Nothing at all. On the RAM side, the Series S doesn't need as much RAM, as it won't be making use of higher-resolution textures. We see this on PC all the time.

If the CPU is clocked lower, that is a potential issue, as game logic will most likely have to be designed around the slowest system. Likewise for the inclusion of the SSD.

Let's wait and see what it is first, though. A 1080p-aimed console at $249 that is small and quiet, with the power of the Xbox One X, is a win come the holidays this year. Especially when parents baulk at the cost of the full next-gen machines at $399 - $499.

The problem here is: how can Microsoft get such a console out at $249 or even $299 without losing a lot of money?

I've done a more detailed (but still missing some parts!) BOM breakdown, and the lowest I can get a possible Lockhart to is ~$250 before packaging/shipping/retail cut/marketing etc.

I also assumed $70 for the SoC, $50 for the SSD and $40 for the 10GB of RAM, so I'm being pretty generous here.

If Lockhart is being set up as a subscription console at $20-$25/month all-in for the cash-strapped masses going into a recession, I can see this either backfiring spectacularly or being a huge success. Or, more likely, a bit of both at the same time!
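The arithmetic behind that ~$250 floor, where the last line item is a pure placeholder for everything not broken out in the post:

```python
# Sketch of the BOM arithmetic above. The first three figures are the
# post's own estimates; the last is a placeholder, not a known cost.
bom_usd = {
    "SoC": 70,
    "SSD": 50,
    "10GB RAM": 40,
    "board/PSU/cooling/controller/misc": 90,  # placeholder remainder
}
print(sum(bom_usd.values()))  # 250, before packaging/shipping/retail cut
```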
 

Hayriko

Member
People are complaining about the Series S. Then why did Sony announce a discless PS5?

Because they somehow knew that MS was planning to release a cheaper console.
So releasing a digital edition will help Sony counter the Series S in a way.

Unless Sony goes for more storage in the discless PS5.

I also know that there are people who will go for digital only.

P.S.: This is just a theory I was thinking about.
 

SaucyJack

Member
People are complaining about the Series S. Then why did Sony announce a discless PS5?

Because they somehow knew that MS was planning to release a cheaper console.
So releasing a digital edition will help Sony counter the Series S in a way.

Unless Sony goes for more storage in the discless PS5.

I also know that there are people who will go for digital only.

P.S.: This is just a theory I was thinking about.

A discless SKU with otherwise identical specs and a console with a third of the power are not exactly the same proposition.
 

Nikana

Go Go Neo Rangers!
Because of its CPU and SSD? I don't mind admitting I'm wrong, but I thought TF was a measurement of the compute units and clock speed of the GPU (CU's x CS / 1M), but I didn't look at the actual numbers that aren't TF related, so forgive me.

The CUs in RDNA 2 are much larger and more capable than GCN's, meaning that 4TF on RDNA 2 is about the same as 6TF on GCN, but with all the features of RDNA 2.
 

Nikana

Go Go Neo Rangers!
While the super easy toolset you're citing is in a fact a totally made-up rumour (WC only says it's weaker), developers complaints about Lockhart have been well reported since it was revealed, then cancelled, then uncancelled, then never announced.
A major reason we might not have seen it yet is poor results from devs.

Secondly, it's harder than PC because it's a closed box.
On PC, you can just give the option to run a game at 720p 30fps, with a lot of granularity since users will then tweak and tinker to make the game run as they want. You can also change minimum settings to make work more feasible.
Lockhart will HAVE to run the same stuff, for the whole gen, as a box that's 3 times as powerful, with a lot more RAM and extra CPU power. This will require a lot of bespoke optimization.
Furthermore, we're already seeing new tech (UE5 and Minecraft RT demos) that will push Lockhart well south of 1080p: running stuff at 720p is ok on PCs, but it will be poison on a console.

Jason Schreier is not a real source.
 

Thirty7ven

Banned
The CUs in RDNA 2 are much larger and more capable than GCN's, meaning that 4TF on RDNA 2 is about the same as 6TF on GCN, but with all the features of RDNA 2.

It's more that the teraflop is a theoretical unit of measurement that ignores the reality of the hardware. Not only does RDNA 2 have a more advanced feature set than GCN; the architecture is also more efficient, allowing real-world performance to more closely match the theoretical TFLOP count.
 

Armorian

Banned
And resolution and framerates are the easiest things to scale; I can see XSX titles just being Lockhart titles in 4K, which would be bad IMO.
Why would you need that amount of RAM if your output is targeted for 1080p?

I think it's going to be a great console. Probably going to get one for my home office, and the XSX for the living room.

You need it for designing games. This is the difference in memory usage in games based on resolution alone; one is 2560x1080 and the other 5120x2160 (basically UW versions of 1080p and 4K).

[screenshots: per-game VRAM usage at 2560x1080 vs 5120x2160]

GTA 5 - ~2.1GB difference

MGS5 - ~2.3GB difference

NFSH - ~3.7GB difference

Ryse - ~2GB difference

ROTTR - ~2GB difference (with HIGH textures; VH takes ~7GB at 2560x1080!)

The framebuffer alone usually doesn't account for more than 3GB of the jump. RAM is unified on consoles, so if devs want to use a low render resolution with some fancy image reconstruction (like DLSS) to free up more main memory, they are fucked on the spot by an XSS that cuts RAM by 6GB; they will have to use PS3-quality textures here :messenger_tears_of_joy:
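A rough sense of why render targets alone can't account for those deltas (the buffer count and bytes-per-pixel here are illustrative assumptions, not measured values):

```python
# Even a fat G-buffer at 4K is well under 1 GB, so most of the
# 2-4 GB per-game differences above come from higher-res texture mips.
def rt_mb(width, height, bytes_per_pixel):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel / 2**20

# Assume ~5 render targets at 8 bytes/pixel each (a generous G-buffer):
print(round(rt_mb(3840, 2160, 8) * 5))  # 316 MiB total at 4K
print(round(rt_mb(1920, 1080, 8) * 5))  # 79 MiB total at 1080p
```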
 
Last edited:

Razvedka

Banned
I have doubted this thing's existence since I first heard of it. Doubly so after Spencer doubled down on how everything in the XSX is static and stable in terms of specs, differentiating it from the PS5's variable clocks.

All that rhetoric sort of implodes with the existence of this thing, as it does mean more work for devs if MS demands all titles run on both machines.

I understand the arguments made that if it's targeting 1080p it should look the same etc. For the most part I think that holds true, but it still means some extra amount of developer effort to make that happen.

I can see why they'd risk it if they could get the price down low enough. But I do think it'd need to be aggressively priced.
 

splattered

Member
Quick example:
XSS should be >= RX 580
XSX should be >= RTX 2080

Average frame rate in 23 game benchmark suite:
RX 580 @ 1080 = 75fps
RTX 2080 @ 4K = 67fps

I'm checking out VRAM usage and my inclination is that they'd be much better served with the originally rumored 12GB mem instead of 10GB mem. Hopefully this rumor is incorrect about that aspect.

Does it show frame rate for RX580 @ 1440?

I'll be getting the Series X for myself and my wife, and then the Series S for the kids.

I'm hoping the Series S at least shoots for 1440 and 60fps.
 

Genx3

Member
No way. Unless Microsoft likes throwing billions away, disabling half of a 360mm^2 chip sounds like a colossal waste to me. Surely it has to be a discrete/unique SoC of around 200mm^2, as I understand these things?

So you're saying it is cheaper to completely throw away binned SoCs instead of repackaging them and selling them for less?
 

Genx3

Member
Good move for parents with young children who dont understand or give a shit about things like 4K and raytracing. That stuff is for us 20 and 30 year olds.
If LH exists, it will be perfectly capable of ray tracing, but likely toned down a bit.

I'm going to end it all if 8 TF are needed to get from 1080p to 4k.

Please tell me Lockhart is a 30fps machine as well?

LH will likely run games at the same frame rate as the XSX. Remember, 4K takes roughly 4X the GPU resources that 1080p needs.
With it rumoured to have the same CPU slightly downclocked, the same type of RAM and the same SSD, it should theoretically play the same games as the XSX at 1080p.
 

Fake

Member
Did he walk back his statement and say that both Series X and Series S have the same CPU clock speed?
 

Marlenus

Member
If this thing is real, they might as well cancel next gen, because aside from PS5 exclusives, most games will only be a marginal jump.



You don't need a fast CPU when you have a GPU that can't put out fast framerates.

If they want a 1080p machine and a 4K machine, the CPUs need to be really close to make sure gameplay loops don't need to be compromised to make the game work.

If slightly lower clocks mean 3.4-3.5GHz with SMT on and 3.6ish with SMT off, then it's probably close enough, given the up-to-3.5GHz PS5 CPU. If it's slower than that, the disparity between the systems will need to be more than just resolution, which will lead to more time spent optimising and less time spent adding polish.
 