
Microsoft confirms 12 TF RDNA 2 for the Xbox Series X

Shin

Banned
The knowledge and expertise Microsoft has gained from all those years in the PC space is vast and wide.
It's a blessing, I'd say, that consoles started using PC-like parts, because Xbox stands to reap the most profit from it IMO.

DirectML has the potential to be a game-changer for next-generation hardware, allowing developers to exploit the power of AI and machine learning to make games more efficient. DirectML has the potential to take Microsoft's next-generation console to a new level, making the Xbox Series X a lot stronger than its 12 TFLOPS graphics processor alone would suggest.

We've spoken about Microsoft's DirectML before at OC3D, but as a whole, DirectML has received little attention from the media. DirectML isn't a confirmed feature of Microsoft's next-generation console, but the timing of DirectML's development suggests that hardware support is planned for the Xbox Series X.
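Side note, not from the article: the closest thing you can poke at today is ONNX Runtime's DirectML execution provider on Windows. A minimal sketch, assuming the onnxruntime-directml package and a hypothetical "upscaler.onnx" super-resolution model:

```python
# Minimal sketch: running an ONNX model through DirectML via ONNX Runtime.
# Assumes `pip install onnxruntime-directml` and a hypothetical model file
# "upscaler.onnx" that takes an NCHW float32 image and returns a larger one.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",                      # hypothetical model path
    providers=["DmlExecutionProvider"],   # DirectML-backed GPU execution
)

low_res = np.random.rand(1, 3, 540, 960).astype(np.float32)  # stand-in frame
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: low_res})
print(outputs[0].shape)  # e.g. (1, 3, 1080, 1920) if the model upscales 2x
```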

 
500 GB SSD would make it


I am now convinced that Lockhart is not a thing as a physical console. I think it's just the xCloud server blade that will replace the current One S server blades. The fact that seemingly no one has a Lockhart dev kit tells me that devs can't optimize for it. So it looks to be an automatic process. This suggests that some kind of quality check needs to happen at Microsoft, like they do with the BC program.

If Lockhart is not a console (the Series naming convention allows for other devices to exist), that means Microsoft is aiming for $399. Take the Kinect away from an Xbox One and the launch price would have been that. Also note that there might be a very good reason why Microsoft waited: an APU on N7+ would be cheaper over its lifetime than a 7nm APU, or offer more power at the same price. Don't be fooled into thinking that more power = higher cost. The 1.3 TF Xbox One APU cost 10% more than the 1.84 TF PS4 APU. By planning for a 2020 launch vs. a 2019 launch, Microsoft could have achieved that: a 12 TF console for $399. With a 500 GB SSD of course. And 16 GB RAM.

I'm almost certain this is the case now. That graphic mentioned bringing TDP down, and that's not necessarily a bullet point for consoles. So I think Lockhart is one of two things:

(1): A Switch-like hybrid. Dock it and it can give about 3.96 TF, undock it and it drops to 2.x (can't remember what the actual clocks were, but those are figures I calculated at the time of seeing it). It'd basically be a next-gen Switch, just not from Nintendo, and leagues more powerful than the current Switch while maybe going for anything between $299 and $399.

Also think this could be something benefiting the MS/Samsung partnership recently announced. They (Samsung) have to be getting more out of it than just pre-installed Xcloud app. Why not have Samsung basically manufacture Lockhart as an extended OEM partner and be the sole distributor in Asia? Lockhart would have a crazy-good chance of success with Samsung handling it in those regions. Could also mean MS and Samsung have figured out a way of splitting royalties on software and subscriptions in those locations between the two.

Think of it like how SEGA would have Hitachi, JVC etc. manufacture their own Saturns (or what Trip Hawkins and EA did with the 3DO), but done by much larger companies with all the kinks ironed out and new forms of revenue having opened up in the past decade (and, let's face it, much better management, both at Microsoft and Samsung, vs. mid-'90s SEGA).

(2): A Surface-like gaming-oriented tablet/laptop. I think MS would handle this more themselves, but still leveraging partnerships such as the one with Samsung, obviously, probably via better integration of Xcloud streaming between the device and Samsung phones, etc. Being gaming-oriented, it would basically be running the same OS as XSX and be a closed box, but I can see MS and other partners releasing compatible apps that are basically versions of Office and other productivity software.

This way, they can bring in PC-like functionality while still maximizing control of the storefront for games, XBL, Gamepass, Xcloud etc. It'd be like perfecting the concept of Steam Machines in a way, as well as a better-implemented version of Sony's Linux support for PS2 and PS3, controlled so that it couldn't be exploited by hackers since it's not REALLY running Windows as a PC would, rather running the same Xbox OS XSX will use, just with Windows-style apps developed for it and made compatible. Of course they'd still have to address things like sharing and transferring files between the device and actual PCs, Macs, phones etc., but I think it would be possible.

It'd still retain the portability function, too, and be positioned as a very powerful Surface/laptop-like device priced around $399. This would be the gaming-oriented SKU, going without the productivity software suite and lacking the features that would bring. They could also sell a version aimed more at the conventional Surface/tablet/laptop market priced higher (maybe $799/$899) that has all of those apps pre-installed and full functionality enabled.

There's something else I thought about with this option: the device would have a top-mounted replaceable keyboard panel. This way the cheaper version ($399) would have a "keyboard" arrangement like a modern-day arcade stick, combining the face and shoulder buttons on the layout logically, and maybe twin touchpads (to simulate twin sticks) or a touchpad & Vita-style thumb slide stick, plus some d-pad buttons as well, and a Home button like you'd have on an Xbox controller. The pricier version of the device ($799/$899) would have a more "normal" laptop keyboard-style setup, aimed at productivity.

I bring up the second option at a higher price because if the cheaper version had all the productivity functionality already there, it would eat into their other Surface product lines, given the graphical power of the device in that type of form factor. I see this option overall being less likely than (1), though, for those reasons plus others. But I thought it was worth throwing out there ;)
 
I see some tech guys having aneurysms reading this lol
FLOPS as a measure were used left and right for supercomputers, which all have vastly different architectures.
Essentially each one is a different architecture.
If you want to quote the book, just do it here.
FLOPS is a measure of peak performance that a specific computing architecture cannot exceed. That's about it.
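For context, my own back-of-the-envelope arithmetic (not from the thread): the headline TFLOPS figure is just shader lanes x 2 FLOPs per fused multiply-add x clock. A minimal sketch, assuming the commonly cited Series X configuration of 52 active CUs at 1.825 GHz:

```python
# Minimal sketch: how a peak FP32 TFLOPS figure is derived.
# Assumes the commonly cited Series X GPU configuration (52 CUs @ 1.825 GHz);
# the formula itself is generic for GCN/RDNA-style GPUs.
cus = 52                      # active compute units
lanes_per_cu = 64             # FP32 shader lanes ("stream processors") per CU
clock_hz = 1.825e9            # game clock in Hz
flops_per_lane_per_clock = 2  # one fused multiply-add = 2 FLOPs

peak_flops = cus * lanes_per_cu * flops_per_lane_per_clock * clock_hz
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # ~12.15 TFLOPS
```

It's a ceiling, not a benchmark: nothing in that formula says how often the lanes are actually kept busy.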

Man, I'm too lazy to do this.
Luckily I found someone that did the exercises.
Well, first the book has to explain the pitfall of using a subset of the performance equation as a performance metric (pages 50 to 51, and page 52 for the big-picture part), and then it proposes an alternative, MIPS, which it tears apart later:


[screenshots: book excerpts on the performance equation and MIPS]



Then, in the exercises it's shown that MIPS isn't a good metric for comparing performance between different processors (page 58).
Later, it presents MFLOPS, which isn't a good performance metric either because it has the same problems as MIPS (exercises 1.12.3 and 1.12.4 are the only ones we care about, but the others are useful for context):


[screenshot: exercise 1.12 from the book]


Solutions:

[screenshot: solutions to exercise 1.12]


Exercise 1.12.4 shows that P1 has higher MFLOPS than P2, but P2 is better than P1 (P2 does in 0.225 seconds what P1 does in 1.125 seconds).
So no, you shouldn't use MFLOPS (or GFLOPS, TFLOPS, etc.) to compare processors, because depending on the architecture you can get a result unrelated to actual performance.
The problem is that it's an easy term to understand, but wrongly used for comparisons between different architectures.

Source:
Computer Organization and Design (5th edition) by David Patterson and John Hennessy
solutions to exercises
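To make that concrete with numbers (mine are hypothetical, not the book's, but they reproduce the 1.125 s vs 0.225 s result quoted above): MFLOPS only counts floating-point instructions, so a processor that needs far more total work can still post the bigger number. A rough sketch:

```python
# Hypothetical illustration of why a higher MFLOPS rating can lose on wall-clock.
# The FP instruction counts below are made up; only the execution times match
# the 1.125 s vs 0.225 s figures quoted from the exercise solution.

def mflops(fp_instructions, exec_time_s):
    """MFLOPS = floating-point instructions / (execution time * 10^6)."""
    return fp_instructions / (exec_time_s * 1e6)

# P1: runs lots of FP instructions, but is slow overall on this program.
p1_time = 1.125          # seconds (from the exercise solution)
p1_fp = 4_000_000        # hypothetical FP instruction count

# P2: needs far fewer FP instructions and finishes the same program 5x faster.
p2_time = 0.225          # seconds (from the exercise solution)
p2_fp = 400_000          # hypothetical FP instruction count

print(f"P1: {mflops(p1_fp, p1_time):.2f} MFLOPS in {p1_time} s")  # ~3.56 MFLOPS
print(f"P2: {mflops(p2_fp, p2_time):.2f} MFLOPS in {p2_time} s")  # ~1.78 MFLOPS
# P1 "wins" on MFLOPS; P2 wins on the metric that matters: time to finish.
```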
 

CyberPanda

Banned
I'm almost certain this is the case now. That graphic mentioned bringing TDP down, and that's not necessarily a bullet point for consoles. So I think Lockhart is one of two things:

(1): A Switch-like hybrid. Dock it and it can give about 3.96 TF, undock it and it drops to 2.x (can't remember what the actual clocks were, but those are figures I calculated at the time of seeing it). It'd basically be a next-gen Switch, just not from Nintendo, and leagues more powerful than the current Switch while maybe going for anything between $299 and $399.

Also think this could be something benefiting the MS/Samsung partnership recently announced. They (Samsung) have to be getting more out of it than just pre-installed Xcloud app. Why not have Samsung basically manufacture Lockhart as an extended OEM partner and be the sole distributor in Asia? Lockhart would have a crazy-good chance of success with Samsung handling it in those regions. Could also mean MS and Samsung have figured out a way of splitting royalties on software and subscriptions in those locations between the two.

Think of it like how SEGA would have Hitachi, JVC etc. manufacture their own Saturns (or what Trip Hawkins and EA did with the 3DO), but done by much larger companies with all the kinks ironed out and new forms of revenue having opened up in the past decade (and, let's face it, much better management, both at Microsoft and Samsung, vs. mid-'90s SEGA).

(2): A Surface-like gaming-oriented tablet/laptop. I think MS would handle this more themselves, but still leveraging partnerships such as the one with Samsung, obviously, probably via better integration of Xcloud streaming between the device and Samsung phones, etc. Being gaming-oriented, it would basically be running the same OS as XSX and be a closed box, but I can see MS and other partners releasing compatible apps that are basically versions of Office and other productivity software.

This way, they can bring in PC-like functionality while still maximizing control of the storefront for games, XBL, Gamepass, Xcloud etc. It'd be like perfecting the concept of Steam Machines in a way, as well as a better-implemented version of Sony's Linux support for PS2 and PS3, controlled so that it couldn't be exploited by hackers since it's not REALLY running Windows as a PC would, rather running the same Xbox OS XSX will use, just with Windows-style apps developed for it and made compatible. Of course they'd still have to address things like sharing and transferring files between the device and actual PCs, Macs, phones etc., but I think it would be possible.

It'd still retain the portability function, too, and be positioned as a very powerful Surface/laptop-like device priced around $399. This would be the gaming-oriented SKU, going without the productivity software suite and lacking the features that would bring. They could also sell a version aimed more at the conventional Surface/tablet/laptop market priced higher (maybe $799/$899) that has all of those apps pre-installed and full functionality enabled.

There's something else I thought about with this option: the device would have a top-mounted replaceable keyboard panel. This way the cheaper version ($399) would have a "keyboard" arrangement like a modern-day arcade stick, combining the face and shoulder buttons on the layout logically, and maybe twin touchpads (to simulate twin sticks) or a touchpad & Vita-style thumb slide stick, plus some d-pad buttons as well, and a Home button like you'd have on an Xbox controller. The pricier version of the device ($799/$899) would have a more "normal" laptop keyboard-style setup, aimed at productivity.

I bring up the second option at a higher price because if the cheaper version had all the productivity functionality already there, it would eat into their other Surface product lines, given the graphical power of the device in that type of form factor. I see this option overall being less likely than (1), though, for those reasons plus others. But I thought it was worth throwing out there ;)
Nice post
 

HolyTruth

Banned
Anyone have a list of the panels or sessions they had planned before the cancelation?

Not sure. But even Facebook will have a digital event instead:

Facebook says that it is still planning to make announcements at GDC around its Oculus business, but will now do that via digital formats with 'video, online Q&As and more'. The firm says it is removing its booth footprint and advising all employees to refrain from travelling to the show.

I still can’t fathom why Sony isn’t doing the same thing.
 

Kagey K

Banned
Not sure. But even Facebook will have a digital event instead:

Facebook says that it is still planning to make announcements at GDC around its Oculus business, but will now do that via digital formats with 'video, online Q&As and more'. The firm says it is removing its booth footprint and advising all employees to refrain from travelling to the show.

I still can’t fathom why Sony isn’t doing the same thing.
I was just curious to see what devs are actually missing out on, since they seem to be the only big company not keeping to the timeline.
 

psorcerer

Banned
Man, I'm too lazy to do this.
Luckily I found someone that did the exercises.
Well, first the book has to explain the pitfall of using a subset of the performance equation as a performance metric (pages 50 to 51, and page 52 for the big-picture part), and then it proposes an alternative, MIPS, which it tears apart later:


[screenshots: book excerpts on the performance equation and MIPS]



Then, in the exercises it's shown that MIPS isn't a good metric for comparing performance between different processors (page 58).
Later, it presents MFLOPS, which isn't a good performance metric either because it has the same problems as MIPS (exercises 1.12.3 and 1.12.4 are the only ones we care about, but the others are useful for context):


[screenshot: exercise 1.12 from the book]


Solutions:

[screenshot: solutions to exercise 1.12]


Exercise 1.12.4 shows that P1 has higher MFLOPS than P2, but P2 is better than P1 (P2 does in 0.225 seconds what P1 does in 1.125 seconds).
So no, you shouldn't use MFLOPS (or GFLOPS, TFLOPS, etc.) to compare processors, because depending on the architecture you can get a result unrelated to actual performance.
The problem is that it's an easy term to understand, but wrongly used for comparisons between different architectures.

Source:
Computer Organization and Design (5th edition) by David Patterson and John Hennessy
solutions to exercises

Oh. God. So much bullshit in that book.
They measure performance by benchmarking, and they state that there is a "required number of instructions", which is clearly false.
The number of executed instructions depends solely on the quality of the algorithm implementation that was used.
I.e. their measure of "execution time" has the same problems as all the other methods.
More than that, it has a human factor inside: was the implementation really optimal for that hardware or not? Obviously, from computability theory you cannot determine that (halting problem).
So in the end what they state is trivial: if you have a specific implementation for each specific piece of hardware, the most reliable method of comparison is to measure the run time.
The problem is: that's also flawed.
For example: memory access. You cannot measure total execution time without also including the time spent on side effects. I.e. a CPU with a worse memory subsystem will have a longer execution time if the implementation is large enough. Etc., etc.
 



 

Alx

Member
The knowledge and expertise Microsoft has gained from all those years in the PC space is vast and wide.
It's a blessing, I'd say, that consoles started using PC-like parts, because Xbox stands to reap the most profit from it IMO.




I just browsed the 2018 SIGGRAPH video they linked in that article, and it got me thinking... When MS announced that all BC games would be improved when running on next gen, I assumed they would be up-rendered like they are on One S/X. But maybe they can guarantee the support by putting them through ML post-processing to be upscaled. Maybe it's even something they intend to do with new games, having the game engine render at a base resolution and then ML-upscale it. That would be a game changer (pun not intended), and would actually make it harder for sites like Digital Foundry to estimate the native resolution of a game.
 

psorcerer

Banned
I just browsed the 2018 SIGGRAPH video they linked in that article, and it got me thinking... When MS announced that all BC games would be improved when running on next gen, I assumed they would be up-rendered like they are on One S/X. But maybe they can guarantee the support by putting them through ML post-processing to be upscaled. Maybe it's even something they intend to do with new games, having the game engine render at a base resolution and then ML-upscale it. That would be a game changer (pun not intended), and would actually make it harder for sites like Digital Foundry to estimate the native resolution of a game.

Why do an ML upscale if it can just be rendered at a higher res?
And then add some effects like the crap that's posted in PC screenshots on this forum.
 
Happily, this is a complete hundred and eighty degree turn from the original Xbox One philosophy.

"We purposefully did not target the highest end graphics. We targeted it more as a broad entertainment play. And did it in an intelligent way and focused on key aspects of the IP for that."



 

Alx

Member
Why do an ML upscale if it can just be rendered at a higher res?
And then add some effects like the crap that's posted in PC screenshots on this forum.

Well, an ML upscale would have the same computational cost whatever the content of the image, while rendering at a higher resolution can have different consequences depending on what's happening in the engine. Also, for BC games you can't be sure that everything can be emulated at higher resolutions. Some of the assets were meant for the original target (like 2D sprites and menus, for example), and in the case of dirty devs there is always the risk of hardcoded values that won't "work" with new resolutions.
The real question is how much latency the ML processing adds to the rendering (it's probably mentioned somewhere in the video, but I haven't watched it entirely).

*e: also, since current games usually go the "dynamic resolution" path, it might be interesting to have the dynamic part be a bit smarter and say "OK, I don't have enough resources to do a full render in time, but I think I can do a low-res render + ML upscale". (Disclaimer: that's pure speculation on my part though, I have no idea how easy it would be to put the upscaling directly into a game rendering pipeline.)
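For what it's worth, the decision logic being speculated about here could look something like this. A minimal sketch with made-up budget numbers and a hypothetical upscale cost, not anything confirmed for the Series X pipeline:

```python
# Hypothetical frame-pacing logic: pick native render when the GPU budget
# allows it, otherwise fall back to a lower internal resolution + ML upscale.
# All numbers and thresholds here are illustrative, not a real engine API.

FRAME_BUDGET_MS = 16.7          # 60 fps target
UPSCALE_COST_MS = 1.5           # assumed fixed cost of the ML upscale pass

def choose_resolution(predicted_native_ms: float) -> tuple[int, int, bool]:
    """Return (width, height, use_ml_upscale) for the next frame."""
    if predicted_native_ms <= FRAME_BUDGET_MS:
        return 3840, 2160, False                      # native 4K fits
    # Rendering at quarter resolution roughly quarters the shading cost.
    if predicted_native_ms / 4 + UPSCALE_COST_MS <= FRAME_BUDGET_MS:
        return 1920, 1080, True                       # 1080p + ML upscale to 4K
    return 1280, 720, True                            # worst case: 720p + upscale

print(choose_resolution(14.0))   # (3840, 2160, False)
print(choose_resolution(30.0))   # (1920, 1080, True)
print(choose_resolution(80.0))   # (1280, 720, True)
```

The appeal is exactly what the post says: the upscale pass has a roughly fixed cost, so the fallback path becomes predictable regardless of scene content.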
 

HolyTruth

Banned
Mwahahahaha. :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:
I'm dying here.

Huh? Why? Nvidia is already using ML to enhance low-res content to 4K on the Nvidia Shield TV Pro. It looks way better than any scaler out there.
I can imagine that Microsoft will do the same.

Automatically ALL old-gen games will look better, almost like native 4K!

This is what Microsoft maybe meant by "no developer input needed". Same with the Nvidia Shield TV Pro.
Of course true 4K is always better, BUT ML-enhanced 4K is way, way better than regular upscaling. Way better.
At least with the Nvidia Shield TV Pro on my 65" LG OLED W9.
 

HolyTruth

Banned
Why do an ML upscale if it can just be rendered at a higher res?
And then add some effects like the crap that's posted in PC screenshots on this forum.

What if you don’t want dev input? And you have old games with very low res? No dev will touch an old game again. This way all games could look better, automagically. No dev input needed.
Nvidia is doing the same with movies on their Nvidia Shield TV Pro. I have one and it’s great.
 

psorcerer

Banned
Huh? Why? Nvidia is already using ML to enhance low-res content to 4K on the Nvidia Shield TV Pro. It looks way better than any scaler out there.
I can imagine that Microsoft will do the same.

Automatically ALL old-gen games will look better, almost like native 4K!

This is what Microsoft maybe meant by "no developer input needed". Same with the Nvidia Shield TV Pro.
Of course true 4K is always better, BUT ML-enhanced 4K is way, way better than regular upscaling. Way better.
At least with the Nvidia Shield TV Pro on my 65" LG OLED W9.
What if you don’t want dev input? And you have old games with very low res? No dev will touch an old game again. This way all games could look better, automagically. No dev input needed.
Nvidia is doing the same with movies on their Nvidia Shield TV Pro. I have one and it’s great.
Because DLSS can look better than native resolution.

That's all so interesting.
What you're saying is that ML/DLSS and other stupid computer algorithms can create a better picture than the army of artists in a typical game studio.
Dunno. Maybe the bar is so low nowadays.
But I think you cannot replace people in studios like Naughty Dog with a mere shell script. 😂
 

psorcerer

Banned
Well, an ML upscale would have the same computational cost whatever the content of the image, while rendering at a higher resolution can have different consequences depending on what's happening in the engine. Also, for BC games you can't be sure that everything can be emulated at higher resolutions. Some of the assets were meant for the original target (like 2D sprites and menus, for example), and in the case of dirty devs there is always the risk of hardcoded values that won't "work" with new resolutions.
The real question is how much latency the ML processing adds to the rendering (it's probably mentioned somewhere in the video, but I haven't watched it entirely).

*e: also, since current games usually go the "dynamic resolution" path, it might be interesting to have the dynamic part be a bit smarter and say "OK, I don't have enough resources to do a full render in time, but I think I can do a low-res render + ML upscale". (Disclaimer: that's pure speculation on my part though, I have no idea how easy it would be to put the upscaling directly into a game rendering pipeline.)

Pretty smart. But same fallacy: developers are so stupid that an ML algorithm can optimize their own games much better.
And developers are so stupid that ML can manage resources in their own game much better!
Last time I checked, fat APIs that decide for the developer what needs to be done are history now. Vulkan gives total control over every aspect of the pipeline.
But here you want to return to the safe space of DX9, where only Nvidia, with literally 1000s of engineers, could optimize the games. Stinks of crappy NV PR too much...
 

HolyTruth

Banned
That's all so interesting.
What you're saying is that ML/DLSS and other stupid computer algorithms can create a better picture than the army of artists in a typical game studio.
Dunno. Maybe the bar is so low nowadays.
But I think you cannot replace people in studios like Naughty Dog with a mere shell script. 😂

No, that’s not what I’m saying.
Let’s say there is an OG Xbox game. A very old one. Do you really think a dev would sit down and recreate the game from scratch? Lol

It’s just a way to optimize/upscale old games.
Nvidia is doing the EXACT same thing with the Nvidia Shield TV Pro, but for movies.
Old low-res movies look much better with this on 4K TVs.
 

Alx

Member
Pretty smart. But same fallacy: developers are so stupid that an ML algorithm can optimize their own games much better.
And developers are so stupid that ML can manage resources in their own game much better!
Last time I checked, fat APIs that decide for the developer what needs to be done are history now. Vulkan gives total control over every aspect of the pipeline.
But here you want to return to the safe space of DX9, where only Nvidia, with literally 1000s of engineers, could optimize the games. Stinks of crappy NV PR too much...

It's not about being stupid. When a developer designed a game for a platform, especially a gaming console whose specs aren't (weren't) supposed to evolve over time, he didn't need to consider other contexts. Typically, games on Xbox 360 were not developed with an 8K screen in mind. So when they designed a UI that looks good in 720p, it wasn't being stupid; that's what they were supposed to do at the time. Still, 15 years later it would be nice to have the same work look good on modern screens without having to make them go back to their code, rework the assets and rebuild the game.
As for developers of new games, giving them the ability to have their game look good at higher resolutions with no additional work isn't calling them stupid (especially when there's nothing forcing them to use it). Devs can still take the time to optimize their game for 8K natively. Or they can also say "I'll focus on 4K and I know the ML upscale will do a good enough job for 8K".
 
Oh. God. So much bullshit in that book.
They measure performance by benchmarking, and they state that there is a "required number of instructions", which is clearly false.
The number of executed instructions depends solely on the quality of the algorithm implementation that was used.
I.e. their measure of "execution time" has the same problems as all the other methods.
More than that, it has a human factor inside: was the implementation really optimal for that hardware or not? Obviously, from computability theory you cannot determine that (halting problem).
So in the end what they state is trivial: if you have a specific implementation for each specific piece of hardware, the most reliable method of comparison is to measure the run time.
The problem is: that's also flawed.
For example: memory access. You cannot measure total execution time without also including the time spent on side effects. I.e. a CPU with a worse memory subsystem will have a longer execution time if the implementation is large enough. Etc., etc.

Well, I don't know, you are arguing against the fathers of MIPS, RISC and RAID, among other recognitions.
I think they might know a little about what they are talking about...
 

darkinstinct

...lacks reading comprehension.
That's all so interesting.
What you're saying is that ML/DLSS and other stupid computer algorithms can create a better picture than the army of artists in a typical game studio.
Dunno. Maybe the bar is so low nowadays.
But I think you cannot replace people in studios like Naughty Dog with a mere shell script. 😂

No, an algorithm powered by 10,000x the performance of a console can anticipate a better result than just running on a single console. It's not about artists. It's about deciding which color any given pixel should have. The way (very abstractly) your console works is that it has a choice of four colors based on the depth buffer and surrounding pixels, and then selects one either at random or always the one on the right side. With deep learning you don't take a semi-random result, you take the most likely result based on previous experience. So when you have fine detail, the algorithm knows that this fine detail needs to be preserved and it is less likely to be ignored.
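As an illustration of what "most likely result based on previous experience" means in practice, here is a minimal, untrained sketch in the style of ESPCN-like super-resolution networks. This is not Microsoft's or Nvidia's actual model, just the general shape of a learned upscaler:

```python
# Minimal sketch of a learned 2x upscaler (ESPCN-style): convolutions predict
# the missing sub-pixel detail, and PixelShuffle rearranges channels into a
# higher-resolution image. Untrained and purely illustrative.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),   # (3*s*s, H, W) -> (3, H*s, W*s)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

low_res = torch.rand(1, 3, 540, 960)     # stand-in for a 960x540 frame
print(TinyUpscaler()(low_res).shape)     # torch.Size([1, 3, 1080, 1920])
```

The "previous experience" part is the training: the convolution weights are fit on pairs of low-res and high-res images, which is where the learned prior about fine detail comes from.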
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Yeah, the streaming part doesn’t make any sense lol.
But everything else, yeah.

This would also explain why Sony is so silent.
They released the Wired article in October. Two months later the GitHub leak happened. Since then: Sony is silent.
Why? Maybe because they realized how much stronger Series X is? And that’s why they really can’t reveal anything, because they are working to make PS5 strong so that it doesn’t look too weak compared to Series X.
It would explain why Sony was so open at first (the Wired articles) and then, as soon as the GitHub leak happened, Sony went silent.
Sony went silent before that leak though

Also, believe in Shinobi :)
 

xool

Member

HolyTruth

Banned
The quality games from WWS a.k.a the best 1st party studios in gaming

Nintendo would like a word with you.
Also, nothing for PS5 has been announced so far. So who knows. Current-gen exclusive games at the beginning of the gen weren’t that great; The Order and Driveclub were massive failures.

but we will see.
 

psorcerer

Banned
It's not about being stupid. When a developer designed a game for a platform, especially a gaming console whose specs aren't (weren't) supposed to evolve over time, he didn't need to consider other contexts. Typically, games on Xbox 360 were not developed with an 8K screen in mind. So when they designed a UI that looks good in 720p, it wasn't being stupid; that's what they were supposed to do at the time. Still, 15 years later it would be nice to have the same work look good on modern screens without having to make them go back to their code, rework the assets and rebuild the game.
As for developers of new games, giving them the ability to have their game look good at higher resolutions with no additional work isn't calling them stupid (especially when there's nothing forcing them to use it). Devs can still take the time to optimize their game for 8K natively. Or they can also say "I'll focus on 4K and I know the ML upscale will do a good enough job for 8K".

Okay. If it's solely for old games that were created for hardware an order of magnitude weaker, I agree that it's a pretty good path.
 


Haven't linked a DF video in a while. Mainly speculation on performance of 4TF Lockhart (Series S) in comparison to the X, using PC hardware at settings to hit possible Lockhart targets.

Wish Richard speculated about it being a hybrid portable or even Surface-like device x3. But at least it seems that latest APU leak (the one with the white and red text on black background, though that's probably taking the information from some spreadsheet on a database) looks likely to be related to Series S and others are considering that more seriously now.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Nintendo would like a word with you.
Also, nothing for PS5 has been announced so far. So who knows. Current-gen exclusive games at the beginning of the gen weren’t that great; The Order and Driveclub were massive failures.

but we will see.
Nintendo is good but I think of them as a secondary console

The Order 1886 was profitable and still has potential as a new IP if they were to give it another shot. It's also not made by a 1st party studio

Naughty Dog, Insomniac, Santa Monica are some of the best in gaming and studios like Bend, Sucker Punch and Guerrilla will most likely rise to top studios next generation
 

HolyTruth

Banned
Sony has the higher rated exclusives, more GOTY nominated games and they simply have the more talented studios

IMHO they also have a lot of overrated games. Especially when you play them after all the buzz, you see how boring they actually are. Nothing really new. Can’t believe how high the ratings for Uncharted are.
Nintendo exclusives, though, are really something special.
 
Dunno. It's a textbook for students.
Maybe it's watered down for kids.
Man, you're such a stubborn person.
It's difficult to admit that you're wrong, isn't it?
I presented you evidence that throws away your assumption that all GFLOPS are the same, but you respond by citing, badly understood, the halting problem.
And yes, CPU time takes into account cache hit and miss times, including sublevels, and also whether data needs to be fetched from memory.
From your comments it seems you don't know what you are talking about, or you learned it wrong.
I hope you someday study with the kids and learn it right.
 

CyberPanda

Banned


Haven't linked a DF video in a while. Mainly speculation on performance of 4TF Lockhart (Series S) in comparison to the X, using PC hardware at settings to hit possible Lockhart targets.

Wish Richard speculated about it being a hybrid portable or even Surface-like device x3. But at least it seems that latest APU leak (the one with the white and red text on black background, though that's probably taking the information from some spreadsheet on a database) looks likely to be related to Series S and others are considering that more seriously now.

Good video. Thanks for posting.
 

psorcerer

Banned
Man, you're such a stubborn person.
It's difficult to admit that you're wrong, isn't it?
I presented you evidence that throws away your assumption that all GFLOPS are the same, but you respond by citing, badly understood, the halting problem.
And yes, CPU time takes into account cache hit and miss times, including sublevels, and also whether data needs to be fetched from memory.
From your comments it seems you don't know what you are talking about, or you learned it wrong.
I hope you someday study with the kids and learn it right.

There is no evidence there.
They assume that there exists an optimal number of instructions, which is not true in the real world. It's exactly the same as O() notation, which works pretty well in the "kids" case but has a lot of problems in more advanced cases (mostly around O(1)).
Of course wall clock measures memory access latencies, pipeline stalls, etc. That's why it's a bad measure of max theoretical performance.
If your benchmark is spending 90% of its time waiting for memory, you're not measuring CPU FP performance at all.
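To illustrate that last point with a toy example of my own (NumPy, exact ratios will vary by machine): the same hardware reports wildly different achieved FLOP/s depending on whether the kernel is compute-bound or spends most of its time waiting on memory.

```python
# Toy demonstration: achieved FLOP/s depends on the workload, not just the chip.
# A memory-bound pass over a large array reaches a small fraction of the FLOP/s
# that a cache-friendly, compute-heavy kernel reaches on the same CPU.
import time
import numpy as np

N = 50_000_000
big = np.random.rand(N)                 # ~400 MB: far larger than any cache

t0 = time.perf_counter()
big *= 1.0001                           # 1 multiply per element, memory-bound
mem_bound_s = time.perf_counter() - t0
print(f"memory-bound:  {N / mem_bound_s / 1e9:.2f} GFLOP/s")

a = np.random.rand(512, 512)            # fits in cache, compute-bound
t0 = time.perf_counter()
c = a @ a                               # ~2 * 512^3 FLOPs
compute_bound_s = time.perf_counter() - t0
print(f"compute-bound: {2 * 512**3 / compute_bound_s / 1e9:.2f} GFLOP/s")
```

Neither number is the CPU's "peak FLOPS"; both are just what this particular code happened to achieve.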
 
People keep saying that they don't trust Microsoft not to f*** up again... but look at what they've done:

-People criticized the lack of exclusives, so Microsoft invested in multiple dev studios, some of which have created some of the best games of this and the prior generation.
-People criticized the lack of power in the Xbox One, so they pumped out the XBO:X, and it's easily the most powerful console this gen *AND* it appears that Microsoft will have the most powerful console from day one for next gen as well.
-People criticized having to buy games over again across generations, so they're working with studios to make it so you don't get "Skyrimmed" this generation (unless the dev/publisher are greedy f***s).
-People criticized the lack of communication from Microsoft regarding their console and how they didn't respond to fan criticism and concerns; they've been pretty damn transparent and open so far, and they've issued multiple mea culpas for how the Xbox One went down (even though the architect of that crapfest of a launch has since left Microsoft to go sabotage other companies).

Honestly, if people aren't giving Microsoft any benefit of the doubt at this point that they'll make something spectacular, then nothing will sway them.
It's called moving the goalpost. Many are not interested in Xbox no matter what. Just concern trolling for many.
 