
Next-Gen PS5 & XSX |OT| Console tEch threaD

Well, I think it's safe to say that the Github leak needs to...

 

SlimySnake

Flashless at the Golden Globes
SlimySnake

Does 12TF now seem kinda conservative/weak with the revelation of 50% better perf/watt???

Nobody was expecting anything close to those gains
It does. Unless my math is completely off, 12 TFLOPs doesn't seem too hot.

Let's pull up the simulated Gonzalo chart and apply the 50% perf/watt improvement to these numbers to get an idea of what it takes to get 12 and 14 TFLOPs.

[Image: simulated Gonzalo clock/power results chart (resultsshjg4.png)]

So now, all of a sudden, that 2.0 GHz that was taking 161W should only take around 106W. Zen 2 should only be ~20W, maybe even less. GDDR6 and the rest should be 30-50W, and you can have a nice ~170W console.

If MS is at 56 CUs at 1.7 GHz, they are only at around 97W for 12 TFLOPs.

Now the question is: why would Sony go with a 106W 9 TFLOPs GPU when they can go with a 97W 12 TFLOPs GPU? Is 15% more die space really that expensive?

56 CUs at 2.0 GHz should be around 148W. That's a 200-220W console. I really don't see why they can't do this.

Edit: this also suggests you wouldn't need a super expensive, fancy cooling solution to cool that GPU, unless it's really 56 CUs running at 2.0 GHz.
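
For anyone who wants to redo that math, here's a minimal sketch of it. Assumptions: the 161W figure comes from the simulated Gonzalo chart above, the Zen 2 and memory wattages are just this post's rough guesses, and the TF formula uses the standard 64 shaders per CU x 2 FLOPs per clock. Nothing here is official.

```python
# Back-of-the-envelope version of the power/TF math above.
# 161W RDNA1 GPU figure: from the simulated Gonzalo chart (assumption).
# 20W CPU and 40W memory/rest: the post's rough guesses (assumptions).

def rdna2_power(rdna1_watts: float, perf_per_watt_gain: float = 0.5) -> float:
    """Power needed for the same performance after a perf/watt improvement."""
    return rdna1_watts / (1.0 + perf_per_watt_gain)

def tflops(cus: int, ghz: float) -> float:
    """FP32 TFLOPs: 64 shaders per CU x 2 FLOPs per shader per clock."""
    return cus * 64 * 2 * ghz / 1000.0

gpu_rdna2 = rdna2_power(161.0)   # ~107W (the post rounds this to 106W)
cpu_zen2 = 20.0
memory_and_rest = 40.0           # midpoint of the 30-50W guess

print(f"Console estimate: {gpu_rdna2 + cpu_zen2 + memory_and_rest:.0f}W")  # ~167W
print(f"36 CUs @ 2.0 GHz = {tflops(36, 2.0):.2f} TF")   # the ~9 TF case
print(f"56 CUs @ 1.7 GHz = {tflops(56, 1.7):.2f} TF")   # the ~12 TF case
```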
 
Not really the DX12 crap again. AMD is claiming they co-developed an API with Microsoft. Ray tracing is a new technology for developers in the console space. Sony making its own API, that's interesting!

We have to wait and see if the PS5 can match the Xbox Series' ray tracing performance. Sony uses different APIs for their consoles, so AMD would have to be involved in the development of a new API to be compatible. Right now AMD has only confirmed working with MS to get the best performance from their silicon.

Sony already has its own API for PS4.
When it comes to RT, they could use Vulkan or OpenGL, or even do their own thing, which is what they have been doing since PS1.
 
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. Post is from user AbsoluteBeginner



This is basically the best working case for Oberon that can also fit what most insiders have been saying, too: that Oberon's tests were running regression tests set to the Ariel iGPU. Since Ariel was an RDNA1-based chip, it did not have RT/VRS built into it. Even if Oberon has RT/VRS (in fact it's pretty damn guaranteed now after today's AMD Financials thingy), they would not be enabled for running an Ariel iGPU regression; even users here like R600 mentioned this months ago.

It also would indicate that the Oberon tests that have been datamined so far do not tell everything about the chip. They may or may not mention the chip CU count (IIRC the first Oberon stepping listed "full chip" with its log), but we've already seen later steppings change the memory controller to increase the bandwidth to the chip. We don't know if Oberon has an extra cluster of CUs disabled on the chip in later steppings beyond the very first one, but I'm thinking if there were, they would have been there from the 2nd stepping onward, and I would think something like that would call for a chip revision instead of just another stepping, but I dunno. Even so, we don't know how many additional CUs are present, if any.

And something else to consider: I saw some people saying AMD mentioned "multi-GHz GPUs" during a segment on GPU products and systems releasing this year? Did that happen? If so, I don't think they would use that phrase if they weren't talking 2GHz or greater, and we know Oberon has a clock at 2GHz. And now we practically know PS5 is RDNA2, which has upwards of 50% more efficiency versus RDNA1. That would obviously also shift the sweet spot northward, which makes an RDNA2 chip at those clocks a lot more feasible. It's still maybe something crazy, but not as crazy as a lot of people were thinking before today's news, eh?

Although that actually raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better. Either the 50% claim over RDNA1 is AMD PR talk, or MS felt no need to push the clock higher and chose guaranteed stability at a cooler GPU clock. However, that presumably also means they designed with the option of upping the clocks in case Sony outperformed them on the GPU front in terms of TFs. The fact they seemingly have gone with a 1.675GHz - 1.7GHz clock on an RDNA2 chip (with the sweet spot probably shifted a good bit northward from the 1.7GHz - 1.8GHz of RDNA1) might hint that they are fairly certain they have the stronger of the two machines, but the question now is by how much? (Also, I kinda shamelessly took the idea of XSX clocks and what they indicate relative to PS5 from another post over there, but I thought it was worth thinking about.)

So yeah, there are still a lot of unknowns, but given Oberon E0 was tested into December of last year, I'm pretty much 100% sure Oberon is the PS5 chip. However, I'm also pretty much 100% sure we haven't really seen a benchmark testing Oberon itself, just the Ariel iGPU profile regressed on Oberon, meaning we haven't seen the entirety of the chip (I think this is exactly why Matt also said "disregard it" in reference to Github, because it wasn't testing the full chip, or even much of anything on the chip outside of the Ariel iGPU). And that's the fun part, because it can run a wide gamut. However, knowing RDNA2 efficiency and XSX's pretty "tame" GPU clock, and the fact that high-level MS and Sony people would know a lot more about each other's systems than any of us, I think that might signal MS is comfortable with the lower clock because they're fairly certain they at least have the bigger chip. Whether that means PS5 is 36/40 or (like a die estimate from a few months ago speculated) 48 CUs, or maybe even in the very low 50s, is unknown.

That's why I've been rolling with 48CUs as Oberon's actual size, and they'll probably disable four for yields. @ 2GHz that actually hits around 11.26TF which is better than my earlier numbers, even. It does kinda depend on Oberon's full size being 48 however, and if they can actually keep the 2GHz clock stable because that is probably still a tad north of RDNA2's upper sweetspot range.

Either way I think we can ALMOST certainly put the 9.2TF PS5 talk to rest now, but funnily enough today's news just reaffirms the datamines, the leak and even the insiders if there's more to Oberon in terms of CUs than the initial test that showed 40 as the "full chip" (which, to be perfectly fair, could have just been referencing the Ariel iGPU profile, since Ariel is a 40CU RDNA1 chip). And being 100% fair, while I do think MS clocking XSX as low as it is (1.675GHz - 1.7GHz) is both odd and maybe indicative they're comfortable they have a performance edge over PS5, Oberon could also be a 58 or 60 CU chip if we're being honest, because again there's the whole butterfly thing and 18x3 gives you 54. So it could be more a case MS knows they have an advantage right now but Sony could have upped performance and then you get MS responding by having headroom to push their clocks higher.

Or it could even be a case that maybe MS doesn't know as much about PS5 as some think, but they might know Oberon is also a big chip, and they want to see for certain where PS5 actually lands by throwing 12TF out there. So if PS5 reveals their number and it's the same or somewhat larger, MS can enable an upclock on the GPU to match or surpass that. And I would think they have already tested the GPU at higher clocks by now, just in case that type of scenario plays out. That's the other way to see their announcement from last week, anyway.

But again, it all hinges on what Oberon actually fully is, and we'll only know for sure if another benchmark test gets datamined that isn't running the chip on an Ariel iGPU profile. Which maybe could come this week, or within the next few weeks. Hopefully soon. If it does and we still see it's a max 40CU chip, then it's time for people to accept that. If it's a larger chip, but at around 48 CUs, then they could either be running it with 4 CUs disabled or with all 48 on, and that would get them between 11.26TF - 12.28TF @ 2GHz, aka virtually identical to XSX. If it's even larger, like a 60CU chip, and they're running at 2GHz even in that case, then it just means MS can upclock the XSX at a rate they've already internally tested as a contingency plan to close the performance gap, because anything beyond 2GHz in a console-like form factor is probably gonna melt silicon.

Thing is, all three of those scenarios have an even chance of playing out, and we're only going to get a better, fuller indication a few weeks from now. Don't throw away one of those possibilities even if you prefer another, because there honestly isn't a very strong reason to throw any of these scenarios out of the window just yet.

but we CAN throw out the idea PS5 isn't using RDNA2, that much is essentially official.
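
For anyone sanity-checking the TF figures in that post, here is the standard FP32 arithmetic it relies on (64 shaders per CU, 2 FLOPs per shader per clock). The CU counts are the thread's speculation, not confirmed specs.

```python
# FP32 TFLOPs for an RDNA-style GPU. The CU counts below are speculative.
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

for cus in (36, 40, 44, 48, 52, 56):
    print(f"{cus} CUs @ 2.0 GHz = {tflops(cus, 2.0):.2f} TF")
# 36 -> 9.22, 40 -> 10.24, 44 -> 11.26, 48 -> 12.29, 52 -> 13.31, 56 -> 14.34
```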

Good post. And not sure if you saw this, but I posted it a few days ago here. It's an AMD link that shows that, in AMD terminology, a letter change is a full model revision, each with a new model number; the number changes are the steppings. https://www.neogaf.com/threads/next...-analysis-leaks-thread.1480978/post-257184963
 

DeepEnigma

Gold Member
Not really the DX12 crap again. AMD is claiming they co-developed an API with Microsoft. Ray tracing is a new technology for developers in the console space. Sony making its own API, that's interesting!

We have to wait and see if the PS5 can match the Xbox Series' ray tracing performance. Sony uses different APIs for their consoles, so AMD would have to be involved in the development of a new API to be compatible. Right now AMD has only confirmed working with MS to get the best performance from their silicon.

That's because Sony hasn't announced their specs yet like Microsoft did. And this has to be talked about for the PC arena, like MS did when working with NVIDIA for it.

Both companies are assigned teams from AMD to work on the silicon and customizations. This isn't Sony's first rodeo with ray tracing. They've been showing it off every generation since the PS2 at every GDC, and have been talking about RT since the PS2 days, more heavily from the PS3 gen on.

Cerny has been talking about it for over a decade. What do you think they will be doing? Sitting on their hands?
 

Darklor01

Might need to stop sniffing glue
So far, this just continues to be in line with what certain members of this forum have been saying for a really, really long time now. Once the TF numbers are known, we can see if they were right that the two are really close in power.
 

saintjules

Member
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. Post is from user AbsoluteBeginner

Nicely written.
 

Zero707

If I carry on trolling, report me.
But the XSX APU is huge; unless they are severely underclocking it, 14TF seems possible.
I'm pretty sure MS can clock the XSX higher, but they decided to stick with 12 TF. Also, clocking it higher, especially on bigger chips, will bring them worse yields, and I believe they want the console quieter.
 

Old Empire.

Member
That's because Sony hasn't announced their specs yet like Microsoft did. And this has to be talked about for the PC arena, like MS did when working with NVIDIA for it.

Both companies are assigned teams from AMD to work on the silicon and customizations. This isn't Sony's first rodeo with ray tracing. They've been showing it off every generation since the PS2 at every GDC, and have been talking about RT since the PS2 days, more heavily from the PS3 gen on.

Cerny has been talking about it for over a decade. What do you think they will be doing? Sitting on their hands?

You're expecting that Sony co-authored and co-developed an API with AMD as well? What if they haven't? You're also assuming Sony is a software developer when they're not; they are a hardware developer. Sony devs are educated in graphics development and in working with current APIs. Adding a new API with new features for ray tracing is not as easy as you think. There are a lot of things that can go wrong performance-wise, and heat increases.

Time will reveal all, because third-party games will be using ray tracing and we'll see the performance on both consoles.
 

Gamernyc78

Banned
Sony already has its own API for PS4.
When it comes to RT, they could use Vulkan or OpenGL, or even do their own thing, which is what they have been doing since PS1.

I said it earlier: Sony's API solutions are either as good or better, smfh. DX12 gave no advantage, as Sony created their own APIs that were great and efficient.

Like, people don't get tired of spewing the same shit every gen? Xbox will have no advantage on the software API front. Sony has some of the smartest and best engineers and software people, console-wise, in the industry, competent enough to create APIs that are on par or superior, with features including ray tracing. Sony has been involved in ray tracing for years.
 

ethomaz

Banned
BTW, the math for the 50% perf/watt works like this:

1/1.5 ≈ 66%
That means the 50% perf/watt gain is equal to ~66% of the power consumption going from RDNA to RDNA 2 (at the same performance).

E.g.:

RDNA: 200W
RDNA2: 133W

Both at the same TFs.

Using the RX 5700 XT's 225W, the same chip (CUs, clock, etc.) on RDNA 2 should have a TDP of around 150W.
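
Spelled out as a quick sketch, taking only AMD's "up to 50% better perf/watt" claim as a given:

```python
# The relation above: a 50% perf/watt gain means ~2/3 of the power
# for the same performance. Only AMD's 50% claim is taken as given.

perf_per_watt_gain = 0.50
power_ratio = 1 / (1 + perf_per_watt_gain)    # ≈ 0.667

rx_5700xt_tbp = 225                           # RDNA1 board power, watts
same_chip_on_rdna2 = rx_5700xt_tbp * power_ratio
print(f"{same_chip_on_rdna2:.0f}W")           # ≈ 150W at identical CUs and clocks
```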
 

Neofire

Member
It's not a big failure, mate. I'd say it's average/above average.
I agree wholeheartedly that that's your opinion, but many have their own as well. Mine being that it's below average, given the amount of R&D Microsoft spent, not to mention marketing. The effort does not make the results.
 

DeepEnigma

Gold Member
I said it earlier: Sony's API solutions are either as good or better, smfh. DX12 gave no advantage, as Sony created their own APIs that were great and efficient.

Like, people don't get tired of spewing the same shit every gen? Xbox will have no advantage on the software API front. Sony has some of the smartest and best engineers and software people, console-wise, in the industry, competent enough to create APIs that are on par or superior, with features including ray tracing. Sony has been involved in ray tracing for years.

They only make CGI movies with it, and have been for a long time.

And they restructured their company a few years ago so that all their entertainment divisions work together more cohesively and share tech.
 

alex_m

Neo Member
Remember when Lisa Su called the successor to RDNA "Next Gen" in last year's presentations?

It should have been obvious that she was talking about AMD's cooperation with Sony and Microsoft.

That said, both companies are getting customized APUs from AMD. Customized means both companies added some secret sauce while collaborating with AMD. Let's see which team did a better job here. My bets are on Sony.

About DigitalFoundry: those AAA-holes better get back to counting frames and comparing screenshots. They are not credible in regards to technology. They have some explaining to do in their next video. I bet Leadbetter will say that they clearly stated they could be wrong in their 9.2TF analysis video. I wouldn't call it clear when everyone cited them as credible proof of PS5 being inferior to the competition.

What I find interesting is when Lisa Su said 150 million units shipped. VGChartz lists 107 million PS4s sold. This means Microsoft only sold 43 million units during its life cycle. That's only 40% of PS4 sales, or 29% of total current-gen console sales. Or in other words, Sony sold roughly 2.5 times more units.
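
A quick check of that arithmetic, using the figures exactly as quoted in the post (150M combined per Lisa Su, ~107M PS4s per VGChartz):

```python
# Sanity-checking the share math with the figures quoted above.
total_shipped = 150_000_000
ps4_sold = 107_000_000
xbox_one_sold = total_shipped - ps4_sold          # 43M

print(f"{xbox_one_sold / ps4_sold:.0%}")          # ~40% of PS4 sales
print(f"{xbox_one_sold / total_shipped:.0%}")     # ~29% of the combined total
print(f"{ps4_sold / xbox_one_sold:.1f}x")         # Sony sold ~2.5x as many
```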
 

SlimySnake

Flashless at the Golden Globes
BTW, the math for the 50% perf/watt works like this:

1/1.5 ≈ 66%
That means the 50% perf/watt gain is equal to ~66% of the power consumption going from RDNA to RDNA 2 (at the same performance).

E.g.:

RDNA: 200W
RDNA2: 133W

Both at the same TFs.
Ah, I knew I was missing something.

So if the 2.0 GHz 40 CU chip was at 161W on RDNA 1.0, on RDNA 2.0 it would be ~106W.
56 CUs at 2.0 GHz would be 225W on RDNA 1.0; on RDNA 2.0 it would be ~148W. That's very high for a GPU, but not impossible. Still a ~200W console if they go with HBM.
56 CUs at 1.7 GHz would be 148W on RDNA 1.0; on RDNA 2.0, ~97W. Really not that bad.
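
The same three scenarios as a small script, assuming the RDNA1 wattages quoted in the thread (161W / 225W / 148W) and dividing by 1.5 for the claimed perf/watt gain; the rounding differs slightly from the post's 106W / 148W / 97W figures.

```python
# The three scenarios above. RDNA1 wattages are the thread's figures
# (simulated Gonzalo chart / RX 5700 XT), not official numbers.

def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

scenarios = [
    # (CUs, GHz, assumed RDNA1 watts)
    (40, 2.0, 161),
    (56, 2.0, 225),
    (56, 1.7, 148),
]

for cus, ghz, rdna1_w in scenarios:
    rdna2_w = rdna1_w / 1.5    # 50% perf/watt gain -> ~2/3 of the power
    print(f"{cus} CUs @ {ghz} GHz: {tflops(cus, ghz):.1f} TF, ~{rdna2_w:.0f}W on RDNA 2")
```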
 

James Sawyer Ford

Gold Member
BTW, the math for the 50% perf/watt works like this:

1/1.5 ≈ 66%
That means the 50% perf/watt gain is equal to ~66% of the power consumption going from RDNA to RDNA 2 (at the same performance).

E.g.:

RDNA: 200W
RDNA2: 133W

Both at the same TFs.

Using the RX 5700 XT's 225W, the same chip (CUs, clock, etc.) on RDNA 2 should have a TDP of around 150W.

Which begs the question: why is the XSX only 12TF when it has a massive APU die size and a massive tower design?
 

Old Empire.

Member
Sony already has its own API for PS4.
When it comes to RT, they could use Vulkan or OpenGL, or even do their own thing, which is what they have been doing since PS1.

Full quote
We also provide lower-level API support, that gives more control to the developers so that they can extract more performance from the underlying hardware platform. This will help mitigate the [ray tracing] performance concern. [...] The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture.

Either AMD co-architected and co-developed an API for Sony, or they did not. Just something to watch out for. Ray tracing is not performance-free!
 

Gamernyc78

Banned
Full quote
We also provide lower-level API support, that gives more control to the developers so that they can extract more performance from the underlying hardware platform. This will help mitigate the [ray tracing] performance concern. [...] The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture.

Either AMD co-architected and co-developed an API for Sony, or they did not. Just something to watch out for. Ray tracing is not performance-free!

I think almost everyone here has an idea that ray tracing will affect performance to a great extent, even with APIs mitigating it. Both companies will have APIs helping with that. "Oh, so if ray tracing is on, a game might be 1080p 30fps, but with it off it might be 4K 60fps?" is the type of comment I've heard around here.
 

ethomaz

Banned
Full quote
We also provide lower-level API support, that gives more control to the developers so that they can extract more performance from the underlying hardware platform. This will help mitigate the [ray tracing] performance concern. [...] The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture.

Either AMD co-architected and co-developed an API for Sony, or they did not. Just something to watch out for. Ray tracing is not performance-free!
I believe Sony already has its own API for RT... that is why the Sony dev was asking if MS fixed the DXR/RDNA2 limitations with ray tracing.
 

Gamernyc78

Banned
I believe Sony already has its own API for RT... that is why the Sony dev was asking if MS fixed the DXR/RDNA2 limitations with ray tracing.

I'm sure they do. They've been working with ray tracing on this console and testing it out for their games. I mean, they even had software-based ray tracing from like the PS2 days lol.
 

MarkMe2525

Member
Catch up again? PS3 outsold Xbox 360 worldwide almost every single month during its lifetime, with a less reputable online service, online going down, inferior multiplats, and costing hundreds more. Nothing Xbox does will bring them close to catching up as long as Sony doesn't have a monumental fuck up.
To act as if Xbox didn't catch up to Sony in the 360/PS3 generation is just disingenuous. That is also a mighty strong reaction to someone suggesting a path for MS to be competitive again.
 

Zero707

If I carry on trolling, report me.
They also said it was Navi 10, whereas AMD confirmed today that both are using Navi 2x, so...
Consoles will use custom APUs; AMD will add whatever the client tells them to add.
Right now we know Oberon will have RDNA 2.0 features; what we don't know is whether Cerny and his team were able to add more CUs in later revisions.
 

SlimySnake

Flashless at the Golden Globes
Confirmed.
I missed the good half of the conference. Can you please tell me if this was explicitly stated? Or was it more obfuscation, like that AMD engineer who said both consoles have native RT but didn't say if they were both RDNA 2.0?
 

DeepEnigma

Gold Member
Consoles will use custom APUs; AMD will add whatever the client tells them to add.
Right now we know Oberon will have RDNA 2.0 features; what we don't know is whether Cerny and his team were able to add more CUs in later revisions.

Did you not understand what I just said, or...

AMD confirmed both are using the latest Zen2 architecture, and both are using the newest RDNA2 Navi2x.

The GitHub was claimed to be Navi10 by _rogame himself. That rules that part of the data out.


Here's the _rogame tweet saying it is Navi 10, if anyone wants to see it
 

Old Empire.

Member
I think almost everyone here has an idea that ray tracing will affect performance to a great extent, even with APIs mitigating it. Both companies will have APIs helping with that. "Oh, so if ray tracing is on, a game might be 1080p 30fps, but with it off it might be 4K 60fps?" is the type of comment I've heard around here.

What I am suggesting is: Microsoft felt they needed to work with AMD to co-author and develop a new API to get the best performance from the hardware ray tracing.

If Sony is doing similar work on its own, is it unlikely they'll see the same gains? This is just speculation, as we don't know what Sony's API development for hardware ray tracing looks like. It's interesting that AMD and Microsoft co-designed an API specifically for ray tracing for their console.
 

SlimySnake

Flashless at the Golden Globes
Seems like they have a ton of headroom to go wide and relatively fast
Yep. I am pretty sure I've mentioned this before, but I think both are going to get into a battle of clocks as we approach the console releases. MS might try and hit 13 TFLOPs too.

Sony likely knows this and is doing their best to hit a high 13 TFLOPs before the reveal. 14 TFLOPs might be too hard.
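
Rough sketch of what that battle of clocks would take, using the same FP32 formula as above; the CU counts here are purely hypothetical examples.

```python
# Clock needed to hit a TF target for a given CU count
# (FP32, 64 shaders per CU, 2 FLOPs per clock). CU counts are hypothetical.

def ghz_needed(target_tf: float, cus: int) -> float:
    return target_tf * 1000.0 / (cus * 64 * 2)

for cus in (48, 52, 56):
    print(f"{cus} CUs: ~{ghz_needed(13.0, cus):.2f} GHz for 13 TF, "
          f"~{ghz_needed(14.0, cus):.2f} GHz for 14 TF")
# e.g. 56 CUs: ~1.81 GHz for 13 TF, ~1.95 GHz for 14 TF
```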
 

Neo Blaster

Member
Holy shit dude, where have you been the last 7+ years, especially with the design of the PS4? They have an entire team of competent architects working on the damn thing.

They have their own in-house APIs, game engines, toolsets, and everything under the sun, like any other platform.

Are you being purposely obtuse, or?
Didn't Osiris once say Sony tools are currently more advanced than MS'?
 