
Next-Gen PS5 & XSX |OT| Console tEch threaD


MadAnon

Member
Flute benchmark:



1. Note the processor codename: 13F9 (Ariel). Same GPU ID as in Gonzalo.







2. The Gonzalo codename follows Sony's naming convention.

3. Oberon (final stepping?) contains Ariel references.

4. All four, Gonzalo/Ariel/Oberon and Flute, are Shakespeare-inspired names, along with the confirmed PS5 codename... Prospero. All three are connected through the Ariel ID.

5. The Gonzalo FS score is 20K. That is unlikely if the system is higher than 10TF, but also unlikely if it's less than 9TF.

6. Flute specifies 16GB of memory. The only way to get there is a 256-bit or a 512-bit bus; another way to get 16GB is a 320-bit bus with mixed-density chips.

The write speeds point to a 256-bit bus and 528GB/s of total BW.

That would point to an ~8-10TF chip in the console, as Navi XT at ~9.5TF (without RT cores) already has 448GB/s and is a bit BW limited.
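For reference, here is a minimal sketch of the GDDR6 bandwidth arithmetic behind those numbers (the per-pin rates are the rumoured figures, not confirmed specs):

def gddr6_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * pin_rate_gbps

# Rumoured Flute setup: 256-bit bus at an effective ~16.5 Gbps -> 528 GB/s
print(gddr6_bandwidth_gbs(256, 16.5))  # 528.0
# Navi 10 XT (RX 5700 XT): 256-bit bus at 14 Gbps -> 448 GB/s
print(gddr6_bandwidth_gbs(256, 14.0))  # 448.0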

This is an analysis of what we know. No cryptic messages, only codenames and benchmark dumps from AMD data miners.

My opinion based on this is that the PS5 is a 36CU system (40CU but with 4 deactivated), clocked at 2.0GHz, with 16GB of RAM (maybe an additional 4GB of DDR4 as well) and a cut-down Zen 2.
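A quick sketch of the TF arithmetic that guess implies, assuming the standard 64 stream processors per CU and 2 FP32 ops per clock (FMA):

def fp32_tflops(cus: int, clock_ghz: float, sp_per_cu: int = 64) -> float:
    """FP32 TFLOPS = CUs * SPs per CU * 2 ops per clock (FMA) * clock in GHz / 1000."""
    return cus * sp_per_cu * 2 * clock_ghz / 1000

print(fp32_tflops(36, 2.0))  # 9.216 TF for the speculated 36CU @ 2.0GHz part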

There's one thing I'm a bit confused about, though. The supposedly larger Arden chip is also a Shakespeare codename, but it was claimed to be the Xbox APU? Why would Xbox use the same naming scheme as the PS5?
 

R600

Banned
There's one thing I'm a bit confused about, though. The supposedly larger Arden chip is also a Shakespeare codename, but it was claimed to be the Xbox APU? Why would Xbox use the same naming scheme as the PS5?
Because these are AMD codenames, not Sony/MS.

A similar thing happened last gen with Thebes, Liverpool, Thebe-J and others.

As per Komachi, Arden has a completely different GPU ID from Oberon/Flute/Gonzalo.

What's funny is that Komachi tweeted in Nov 2018 about the first sample of the PS5 chip, and it's been quite consistent till now. It started low clocked, then a revision at 1.8GHz, and finally Oberon at 2.0GHz.

What also makes me wonder is the May dev kit leak, which specified PCB and motherboard parts and also listed 16GB of GDDR6 RAM with 18Gbps chips from Samsung.

These chips are not yet in full production and would be the ONLY ones to deliver >512GB/s on a 256-bit bus (16Gbps chips, currently the fastest ones, only provide 512GB/s on that bus).

Interestingly enough, the Flute benchmark shows a 256-bit bus and 528GB/s of BW (which only works with 18Gbps chips).

Many waved that leak away back in May, saying it was too early for APU-based dev kits, but now we know the V-shaped APU-based dev kits have been with developers since summer.

All in all, that rumour also pointed at a 316mm² die size, which would correlate with ~40CUs on die, albeit highly clocked as per the Oberon leak.
 

TeamGhobad

Banned
Because these are AMD codenames, not Sony/MS.

A similar thing happened last gen with Thebes, Liverpool, Thebe-J and others.

As per Komachi, Arden has a completely different GPU ID from Oberon/Flute/Gonzalo.

What's funny is that Komachi tweeted in Nov 2018 about the first sample of the PS5 chip, and it's been quite consistent till now. It started low clocked, then a revision at 1.8GHz, and finally Oberon at 2.0GHz.

What also makes me wonder is the May dev kit leak, which specified PCB and motherboard parts and also listed 16GB of GDDR6 RAM with 18Gbps chips from Samsung.

These chips are not yet in full production and would be the ONLY ones to deliver >512GB/s on a 256-bit bus. Interestingly enough, the Flute benchmark shows a 256-bit bus and 528GB/s of BW (which only works with 18Gbps chips).

Many waved that leak away back in May, saying it was too early for APU-based dev kits, but now we know the V-shaped APU-based dev kits have been with developers since summer.

All in all, that rumour also pointed at a 316mm² die size, which would correlate with ~40CUs on die, albeit highly clocked as per the Oberon leak.

and MS has about 350mm^2 but is still less powerful, compute64?
 

Blizzje

Member
Because these are AMD codenames, not Sony/MS.

A similar thing happened last gen with Thebes, Liverpool, Thebe-J and others.

As per Komachi, Arden has a completely different GPU ID from Oberon/Flute/Gonzalo.

What's funny is that Komachi tweeted in Nov 2018 about the first sample of the PS5 chip, and it's been quite consistent till now. It started low clocked, then a revision at 1.8GHz, and finally Oberon at 2.0GHz.

What also makes me wonder is the May dev kit leak, which specified PCB and motherboard parts and also listed 16GB of GDDR6 RAM with 18Gbps chips from Samsung.

These chips are not yet in full production and would be the ONLY ones to deliver >512GB/s on a 256-bit bus (16Gbps chips, currently the fastest ones, only provide 512GB/s on that bus).

Interestingly enough, the Flute benchmark shows a 256-bit bus and 528GB/s of BW (which only works with 18Gbps chips).

Many waved that leak away back in May, saying it was too early for APU-based dev kits, but now we know the V-shaped APU-based dev kits have been with developers since summer.

All in all, that rumour also pointed at a 316mm² die size, which would correlate with ~40CUs on die, albeit highly clocked as per the Oberon leak.

40 CUs on die with 4 deactivated?
 

R600

Banned
and MS has about 350mm^2 but is still less powerful, compute64?
If MS went with a 320-bit bus, it's an additional 16mm² on die vs a 256-bit bus.

So MS can have 4 more CUs and a 320-bit bus as well, and yet still be less powerful. How? Lower clocks + lower-clocked memory chips on a wider bus.

Something like this:

PS5

36CUs @ 2.0GHz - 9.2TF
256-bit bus, 18Gbps chips - max 576GB/s (can and probably would be downclocked as these are FAST and hot)

Anaconda

40CUs @ 1.8GHz - 9.2TF
320-bit bus, 14Gbps/16Gbps chips (560GB/s - 640GB/s).

So it could be very hairy if Anaconda is ~30mm² bigger with a wider bus and 4 more CUs, but also lower clocks.

Obviously, if they went with a 320-bit bus and more CUs, they have more headroom to react: go with faster chips, thus upping the system's BW, and also clock the CUs higher, while Sony can't do that if they already use the fastest memory chips on a narrower bus.
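To make the comparison concrete, a small sketch plugging the speculated configs above into the usual formulas (all figures are the rumoured ones, not confirmed):

def fp32_tflops(cus, clock_ghz):
    # FP32 TFLOPS = CUs * 64 SPs * 2 ops per clock (FMA) * clock in GHz / 1000
    return cus * 64 * 2 * clock_ghz / 1000

def bandwidth_gbs(bus_bits, pin_rate_gbps):
    # Peak GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
    return bus_bits / 8 * pin_rate_gbps

# Speculated PS5: 36CUs @ 2.0GHz, 256-bit bus with 18Gbps chips
print(fp32_tflops(36, 2.0), bandwidth_gbs(256, 18))  # 9.216 TF, 576 GB/s
# Speculated Anaconda: 40CUs @ 1.8GHz, 320-bit bus with 14-16Gbps chips
print(fp32_tflops(40, 1.8), bandwidth_gbs(320, 14), bandwidth_gbs(320, 16))  # 9.216 TF, 560-640 GB/s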
 

MadAnon

Member
If MS went with a 320-bit bus, it's an additional 16mm² on die vs a 256-bit bus.

So MS can have 4 more CUs and a 320-bit bus as well, and yet still be less powerful. How? Lower clocks + lower-clocked memory chips on a wider bus.

Something like this:

PS5

36CUs @ 2.0GHz - 9.2TF
256-bit bus, 18Gbps chips - max 576GB/s (can and probably would be downclocked as these are FAST and hot)

Anaconda

40CUs @ 1.8GHz - 9.2TF
320-bit bus, 14Gbps/16Gbps chips (560GB/s - 640GB/s).

So it could be very hairy if Anaconda is ~30mm² bigger with a wider bus and 4 more CUs, but also lower clocks.

Obviously, if they went with a 320-bit bus and more CUs, they have more headroom to react: go with faster chips, thus upping the system's BW, and also clock the CUs higher, while Sony can't do that if they already use the fastest memory chips on a narrower bus.
It could be more than 350mm². It was 300mm² for Oberon and 350mm² for Arden. But as I understand it, those are not exact numbers but rather rough estimates? The E3 shot of the Xbox APU suggested more than 350mm². I think Xbox is legitimately in 52CU territory. With 1.8GHz they can aim for 12TF, but I don't think they will reach it. It will probably be clocked lower, around 1.7GHz.
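For reference, a quick sketch of what that 52CU guess works out to, assuming 64 SPs per CU and 2 FP32 ops per clock (the CU count and clocks are speculation, not confirmed):

def fp32_tflops(cus, clock_ghz):
    # FP32 TFLOPS = CUs * 64 SPs * 2 ops per clock (FMA) * clock in GHz / 1000
    return cus * 64 * 2 * clock_ghz / 1000

print(fp32_tflops(52, 1.8))  # ~11.98 TF at 1.8GHz
print(fp32_tflops(52, 1.7))  # ~11.32 TF at 1.7GHz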
 

R600

Banned
but they said 12tflops..
Who? If that leak does not specify a 12TF Navi chip, I don't believe it.

I read so much stuff about Durango and Orbis back in 2012 (from reliable journos) that turned out to be nothing but hot air.

All I know is, there are confirmed chip codenames from Sony and MS.

From Sony, we know CPU clocks, cache and memory arrangement. We also know the TS score, so with BW and memory arrangement, as well as clock speed (Oberon at 2.0GHz), we CAN get a realistic TF count.

From MS, we have an AMD codename and GPU ID (1607). We also have 320-bit bus confirmation and a rough die size calculation of about ~350-360mm².

I work from these very logical and very well-sourced facts. What journo nr. 1 says on some forum I am not going to take into consideration.

We had klee go from saying games will look amazing for the number of TF that are in the consoles (therefore hinting at a bit less TF than expected), to "both will be 10TF+" and "Lockhart is dead, people".

Basically, very vague and very hard to verify. It could even be a case of an old journo who wants to get some fame on well-known forums by making stuff up (he does seem to be succeeding).
 

R600

Banned
It could be more than 350mm². It was 300mm² for Oberon and 350mm² for Arden. But as I understand it, those are not exact numbers but rather rough estimates? The E3 shot of the Xbox APU suggested more than 350mm². I think Xbox is legitimately in 52CU territory. With 1.8GHz they can aim for 12TF, but I don't think they will reach it.
I believe the PCB leak from May was correct and the PS5 is 316mm².



I also think they will be shooting for 40CUs (36 active) because of BC (which was confirmed by the clocks in the Oberon leak) as well as die size (no room for more at 316mm²).

For MS it's hard to tell, but if the PS5 is 316mm² with a 256-bit bus, then for the same number of CUs you would need ~332mm² with a 320-bit bus. With AI cores and RT I don't think you can fit 52CUs in 350mm². Maybe, maybe in 360mm², but that's a stretch as well since some will have to be deactivated.
 
I'm just gonna leave this here:

PSM: 13/02/2.0
Sounds like 13TF, 2 GHz, RDNA 2.0, and it will be revealed on Feb 13th 2020, right?

Anyway, 13TF is closer to 14TF than lowballed single-digit TF, so I'll take it! ;)

LOL, I don't understand why everyone is sooo surprised to hear that Scarlett (and likely PS5 as well) will be 12 TFLOPs. There have been no fewer than a dozen "leaks" and rumors (mostly for PS5) all pointing to the GPU being "double digit" TFLOPs, with most people saying 12-14 TFLOPs for the better part of the last year. These "new" Scarlett specs match what was leaked back in Feb 2019 indicating 12+ TFLOPs on the GPU, and PS5 leaks have been pointing to the GPU being 12-14 TFLOPs since last winter. I've been saying it repeatedly for months that people are underestimating next-gen specs, and supporting my claims with sound logic and some facts. Just because you can't see how they can make that happen doesn't mean they won't. When you have giant corporations like Sony, Microsoft, and AMD investing 9-figure $ amounts in 4-5 years of development of a custom chip solely for the application of this gaming console, I would trust that they will make some amazing things happen :)

More specifically, to the skeptics talking about TDP and power, consider this: how many people actually remember that there are 3 variants of the RTX 2080 GPU from Nvidia? Right, let's recap:

GPU | TDP (W) | Approx. power | Clock speed (base / boost)
RTX 2080 (Desktop) | 215 | +15% over 5700 XT; 7-8% > 1080 Ti | 1515 / 1710
RTX 2080 (Laptop) | 150 | 2-3% faster than GTX 1080 Ti | 1380 / 1590
RTX 2080 (Max-Q) | 80-90 | ~GTX 1080; 5-10% less than a 5700 (non-XT) | 735-990 / 1095-1230
RX 5700 XT (Desktop) | 225 | 7% less than GTX 1080 Ti; slightly faster than RTX 2070 | 1605 / 1905

So what do we take from this?
  1. Nvidia is able to take a full RTX 2080 GPU (2944 shading units) and scale it from 215W down to 80W for ultra-slim laptops. 80-100W is an ideal power envelope for a console GPU.
  2. Nvidia is able to get 1080 Ti-level performance in under 150W on a 12nm process! AMD has the advantage of being on a 7nm process, which is roughly half the size per transistor. Do you really think it's not feasible for AMD to achieve 1080 Ti-level perf at around 100W TDP on 7nm if Nvidia is already doing it at under 150W on 12nm?
  3. Console GPUs are not the same as desktop GPUs! In fact, they are closer to the GPUs you find in laptops (but still customized from those). Remember, AMD also has the advantage that these consoles are a fixed box with a singular cooling solution they can design for. With desktop PCs, every user can have a different system cooling solution, making it harder to design around. Also, console GPUs do not look like the massive external boards used for desktop GPUs, as they are integrated directly into the APU and draw their power from the system. In other words, their size and heat output are much lower than their desktop counterparts.
As I've been saying for a while now, it is absolutely possible for AMD to develop a GPU in the 12 TFLOP range for a console by the end of 2020. The next-gen console GPUs will also be based on silicon that we have not seen yet and won't be released in any form until mid 2020 (RX 5800 series). They will be even more efficient than current-gen Navi and support a wider range of power SKUs. Trust me, next gen will be a big increase across the board!
We need more informative posts like this one and less system wars/agenda-driven BS.

good lord, some of the backpedaling here is something else. just admit you were wrong and 12 tflops wasn't as impossible as you said.

no idea why men just can't man up and admit they made a mistake. it's like the easiest thing to do.

i am getting too old for this bullshit.
Are you talking about the same guy who couldn't accept the possibility of 7nm EUV, despite all the benefits (fewer lithography masks, more performance, less power) it offers?

There's no other way they could hit 2 GHz on a monolithic APU (unless they go chiplet/discrete GPU, which is not very likely). The signs are all there!

If you all really believe that the FF2400 is more powerful than 0000FF, just give it a minute. They are BOTH > 10, with 0000FF having a slight advantage. Neither machine is sitting at 10TF. Let's see when others start talking about the Ray Tracing differences and what 0000FF has planned for extra storage. I'll go back and read everything back to page 299. Diana is not burned, so I will be around. Also, Phil does not have a dev kit. I'm going to ask if I can be very specific and I will post either way. Everything's about to drop; the dam has been bombarded by Thanos.
2TB QLC NAND confirmed?

Hmmm, good point...maybe the APU will handle the physics portion? 🤷‍♂️

I am basically just torn on the split cooling on the dev kits; what else would need that much cooling on the other side of the mainboard?! I mean, the RAM you want to be as close to the APU as possible, so that wouldn't be on the other side like that, so my spidey sense is saying that thing has a discrete GPU in it. I know I am probably waaaay overthinking the whole situation and no one will know for sure until 2020, unless someone rips open a dev kit and posts it online!
There's this crazy rumor from 2017, although I wouldn't bet on it:

those 8 cores were actually as "powerful" as a 2C/2T Celeron
Not true:
It's not so much optimizing as the fact that dx12 is finally making use of the 8 cores in the xbox one. Also, the xbone shadows are lower than the pc settings on low. Basically you can't match them. Digital Foundry also ran the tests with tessellation on, while the xbone only has it applied to snow.

But that's not the main reason it went so low.



Look around 40 seconds in

The 950 went down to 15 fps with an i3 and got around 40-45 fps with an i5. The only reason that test did so badly was that i3 dual-core cpu. Though, dx12 aside, perhaps the game just makes better use of cores, where the i3 is limited to only having 2.

The xbone is also using dx12 features/async compute to make more use of its 8 cores, which the pc does not have available yet due to amd and nvidia not releasing drivers for their cards and devs not activating/implementing dx12 in games yet. Tomb Raider has a hidden dx12 option available on pc but it's not working yet.

Why? Likely because, one, they need the drivers to function on dozens of video cards vs just 1, and two, nvidia/amd likely want to reveal the drivers/functionality for when their full dx12 cards are out.

In any case, the fact that a quad core increases performance by as much as 3 times on a 950 is telling enough. Dx12 or not, that was the limiting factor here. But let's get real, an i3 should not be considered for a gaming pc, as the cpu is the base of the computer. It's the part you will never have to upgrade for years if you get anything better than a dual core, unlike video cards, which might need a refresh every 3-5 years. For reference, I'm still using my i7 920 (4C/8T) cpu from 2009 and it is doing just fine in 1080p.



8-core Jaguar is better than i3 (2C/4T), but worse than i5/i7.

So you're basically saying that AMD purposely lets NV be the undisputed king of the GPU market? Because by your logic/math, all they should do is just downclock the 5700 GPUs (which already run at a moderate 1.7GHz) and add CUs (well, it's AMD after all, so the "add more cores!" strategy would fit perfectly here ;D), but somehow... they don't! And we are talking about PCs here, where they are not so limited by space, power usage, thermals, etc.
AMD is busy juggling lots of projects (CPUs, GPUs, APUs) and it's a far smaller company (in terms of market cap) compared to Intel (CPU-focused company) or nVidia (GPU-focused company).

The problem is that AMD's architectures don't scale well, if at all, after a certain point - we have a 3rd iteration of the Zen architecture, a 2nd die shrink (14>12>7nm), and still ~4GHz is the wall. And the frequency is what drives IPC, you know, the Instructions Per CLOCK - it's like torque and HP in cars: you can talk about either of the two, but it's the combination of both that makes the final performance. And the same thing applies to their GPUs - there seems to be a wall between frequency and CU count which AMD's GPUs seem to be unable to surpass; they can either put in a lot of CUs at a low clock, or few CUs at a high clock, or some sort of middle ground, but the wall always appears at the same spot, which the Fury X already hit in mid 2015.
First of all, you confuse IPC with single-threaded performance. Rookie mistake.

ST = IPC * clock
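A minimal sketch of that distinction, with made-up numbers purely for illustration:

# Two hypothetical CPUs with the same single-threaded performance,
# reached via different IPC/clock mixes.
cpu_a_ipc, cpu_a_clock_ghz = 2.0, 4.0   # wider core, lower clock
cpu_b_ipc, cpu_b_clock_ghz = 1.6, 5.0   # narrower core, higher clock

st_a = cpu_a_ipc * cpu_a_clock_ghz  # 8.0 (instructions per ns, roughly)
st_b = cpu_b_ipc * cpu_b_clock_ghz  # 8.0 as well
print(st_a == st_b)  # True: equal ST performance from very different IPC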

Second, Zen 2 scales far better than Intel's Coffee Lake uarch. Zen 2 can go all the way up to 4.7 GHz, all while consuming less power than equivalent Intel CPUs. 7nm EUV will allow them to hit 5 GHz on binned chips (4950X).

Regarding GPU clocks, you are right about Polaris, but not Navi. Navi can go up to 2+ GHz, just like nVidia.

BUT - all that being said, and what we all seem to forget, is that Navi, a.k.a. RDNA1, is STILL a GCN-based architecture - a greatly improved and optimized one, sure, but still GCN at its core. So maybe that's it - the next-gen consoles will be based on RDNA2, which is supposed to be AMD's truly new GPU architecture, a Ryzen equivalent for the GPU market, which IMO is the only logical explanation for such high rumored TF numbers? Yeah, I think that's what I'm going to bet on from now on. Especially since RDNA2 is also supposed to have RT support, it all just fits too well.
RDNA2 uarch will still conform to the GCN ISA (due to BC/Mantle reasons).

Just like nVidia hasn't changed their RISC ISA since 2006 (G80). There's a reason CUDA has offered BC since then.

x86 CPUs are the same. Coffee Lake/Zen 2 still conform to the x86 ISA that the IBM PC had back in 1981, all the way down to BIOS BC (UEFI CSM).

The big difference between CPUs and GPUs is how they handle BC. CPUs use a microcode layer that translates CISC macro-ops to RISC-style micro-ops, while GPUs use HAL/drivers (even if they're thin ones like DX12/Vulkan).
 

Mega Man

Member
So now that they are reporting the next Xbox will be named based on its purpose, I feel like we can begin narrowing down the possibilities...

“Our naming convention has been around what we think the capabilities are,” Spencer added

Based on the SSD:
Xbox Lightning
Xbox Quick
Xbox Flash

Based on Streaming:
Xbox Cloud
Cloud Box
Xbox Air

Based on Power:
Xbox 12 (for teraflops)
Xbox Tera
Xbox Hammer

...Some solid options for sure
 
There's this crazy rumor from 2017, although I wouldn't bet on it:

Lol, I actually forgot about that rumor, but trust me, I highly doubt they'd do a discrete GPU as well. I was only entertaining the possibility because of the way they have the cooling set up on the dev kit, with the venting split like that, as if there were something running hot on each side of the board. It will be interesting when they do release the final specs though; I am really keeping my fingers crossed for a 12-13TF machine with a MINIMUM of 16 GB RAM. I have 100% faith in Cerny, he knows better than pretty much anyone what the devs need and I know he will deliver the best machine for the price when it comes out.
 

Gamernyc78

Banned
PS5 being more powerful than the nextbox at 11/12 TF and above will be vindicated. A lot of butt holes will be hurt (pause). This is going to be glorious and exciting for gamers, as both seem to be packing a punch.
 

ANIMAL1975

Member
I believe the PCB leak from May was correct and the PS5 is 316mm².



I also think they will be shooting for 40CUs (36 active) because of BC (which was confirmed by the clocks in the Oberon leak) as well as die size (no room for more at 316mm²).

For MS it's hard to tell, but if the PS5 is 316mm² with a 256-bit bus, then for the same number of CUs you would need ~332mm² with a 320-bit bus. With AI cores and RT I don't think you can fit 52CUs in 350mm². Maybe, maybe in 360mm², but that's a stretch as well since some will have to be deactivated.

What about the implications of the "16 Samsung K4ZAF325BM-HC18 in clamshell configuration" for the die size? Are you considering it in your 'amount of CUs that fit' calculations?
And you say it has to be downclocked for heat purposes, when people have said before that the clamshell configuration is exactly for better cooling and heat dissipation...
 

sinnergy

Member
Slow and wide is the best option: if you want to increase clocks as a last-minute option, heat is much more manageable. Also up the BW so it's not starved. I bet MS will go that route.
 

SlimySnake

Flashless at the Golden Globes
So now that they are reporting the next Xbox will be named based on its purpose, I feel like we can begin narrowing down the possibilities...

“Our naming convention has been around what we think the capabilities are,” Spencer added

Based on the SSD:
Xbox Lightning
Xbox Quick
Xbox Flash

Based on Streaming:
Xbox Cloud
Cloud Box
Xbox Air

Based on Power:
Xbox 12 (for teraflops)
Xbox Tera
Xbox Hammer

...Some solid options for sure
Xbox Pass
 

R600

Banned
What about the implications of the "16 Samsung K4ZAF325BM-HC18 in clamshell configuration" for the die size? Are you considering it in your 'amount of CUs that fit' calculations?
And you say it has to be downclocked for heat purposes, when people have said before that the clamshell configuration is exactly for better cooling and heat dissipation...
The implications of that are very clear.

According to that leak, there are 16 chips of 18Gbps in the devkits, which means the bus width can only be 256-bit or 512-bit. Since 512-bit is way, way too high, we can safely conclude it's 256-bit. If they went with a 512-bit bus, they would need 9Gbps chips to achieve the same BW, so why 18Gbps? That's 1152GB/s - insanity.

So it can't be anything but those two. A 512-bit bus would be GIGANTIC (basically an additional 64mm² on die vs 256-bit). And to top it all off, that's the part of the die that is hardest to shrink with smaller nodes. Therefore, chip revisions would not be as small as required, resulting in higher prices throughout the gen. So your best bet is to get that bus as narrow as possible if you are concerned about future cost reductions.

18Gbps was probably chosen because a 256-bit bus is, well, not narrow, but not exactly wide either. But 18Gbps chips on a 256-bit bus still give 576GB/s, which is a lot. If they went with 16Gbps (current Nvidia Super cards) that would result in 512GB/s, which I think is a touch too little.

I assume, like every time consoles have been released, they won't run these chips at full clocks, but even 17Gbps would still be 544GB/s. So plenty for an 8-core Zen 2 and an 8-10TF GPU.
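A small sketch of those candidate configurations using the standard GDDR6 bandwidth arithmetic (all pin speeds are the speculated options discussed above):

def bandwidth_gbs(bus_bits, pin_rate_gbps):
    # Peak GB/s = (bus width in bits / 8) * per-pin data rate in Gbps
    return bus_bits / 8 * pin_rate_gbps

print(bandwidth_gbs(512, 18))  # 1152 GB/s - the "insanity" option
print(bandwidth_gbs(256, 18))  # 576 GB/s - 18Gbps chips at full clock
print(bandwidth_gbs(256, 17))  # 544 GB/s - slightly downclocked
print(bandwidth_gbs(256, 16))  # 512 GB/s - current 16Gbps chips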
 

R600

Banned
Slow and wide is the best option: if you want to increase clocks as a last-minute option, heat is much more manageable. Also up the BW so it's not starved. I bet MS will go that route.
Not exactly.

1. GCN and Navi scale worse as more CUs are added vs a clock uplift
2. A wide (or extra-wide) bus results in a bigger die, resulting in worse yields and, in the end, a big hit on the pocket
3. To make things worse, the bus is the hardest thing to shrink with node revisions, so you do not want too wide a bus with slow memory, because you will not be able to bring that chip down in size - resulting in a hit on the pocket and slower price reductions

All in all, there was a reason the X went with a 384-bit bus: they never thought about reducing the size of that chip, as it costs quite a bit to redesign it on a smaller node. For a console that will sell 100+ million, though, that is a thing to keep in mind...
 

ANIMAL1975

Member
The implications of that are very clear.

According to that leak, there are 16 chips of 18Gbps in the devkits, which means the bus width can only be 256-bit or 512-bit. Since 512-bit is way, way too high, we can safely conclude it's 256-bit. If they went with a 512-bit bus, they would need 9Gbps chips to achieve the same BW, so why 18Gbps? That's 1152GB/s - insanity.

So it can't be anything but those two. A 512-bit bus would be GIGANTIC (basically an additional 64mm² on die vs 256-bit). And to top it all off, that's the part of the die that is hardest to shrink with smaller nodes. Therefore, chip revisions would not be as small as required, resulting in higher prices throughout the gen. So your best bet is to get that bus as narrow as possible if you are concerned about future cost reductions.

18Gbps was probably chosen because a 256-bit bus is, well, not narrow, but not exactly wide either. But 18Gbps chips on a 256-bit bus still give 576GB/s, which is a lot. If they went with 16Gbps (current Nvidia Super cards) that would result in 512GB/s, which I think is a touch too little.

I assume, like every time consoles have been released, they won't run these chips at full clocks, but even 17Gbps would still be 544GB/s. So plenty for an 8-core Zen 2 and an 8-10TF GPU.
You didn't understand my point: if those chips are in a clamshell configuration, they are equally divided between the front and back of the die, leaving half of the space they would occupy free for more CUs, or am I missing something?

Fake edit: And in the same line of thinking, the better heat dissipation and cooling of the chips in clamshell (per the Sony cooling patent find) also leaves more room to work with in the total system wattage.
 

R600

Banned
You didn't understand my point: if those chips are in a clamshell configuration, they are equally divided between the front and back of the die, leaving half of the space they would occupy free for more CUs, or am I missing something?

Fake edit: And in the same line of thinking, the better heat dissipation and cooling of the chips in clamshell (per the Sony cooling patent find) also leaves more room to work with in the total system wattage.
Yes, the space on die will be exactly the same in clamshell mode as in regular mode. Why this leaker provided such info is also a question.

It's funny that he posted this on the 20th of May, but when he posted it, the time clearly pointed at an Asian time zone. Perhaps someone from manufacturing.
 
If you all really believe that the FF2400 is more powerful than 0000FF, just give it a minute. They are BOTH > 10, with 0000FF having a slight advantage. Neither machine is sitting at 10TF. Let's see when others start talking about the Ray Tracing differences and what 0000FF has planned for extra storage. I'll go back and read everything back to page 299. Diana is not burned, so I will be around. Also, Phil does not have a dev kit. I'm going to ask if I can be very specific and I will post either way. Everything's about to drop; the dam has been bombarded by Thanos.
Oh shit. How did I miss this post?
 

LordOfChaos

Member
Hmm... are you referring to the Wired article from August?
Interesting that they note using RT for audio. AMD has had that since 2013.

Anyhow, quotes by studios included: "I could be really specific and talk about experimenting with ambient occlusion techniques, or the examination of ray-traced shadows," says Laura Miele. "More generally, we’re seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools."


About Mister Xmedia twisting that around again and saying it could only ray trace shadows or something, after being flat wrong about it lacking hardware RT.

Point being, he's full of shit lol
 

Gamernyc78

Banned
About Mister Xmedia twisting that around again and saying it could only ray trace shadows or something, after being flat wrong about it lacking hardware RT.

Point being, he's full of shit lol

We've known him to be full of shit since the 360 days, what's new? That dude is a bona-fide Xbox shill that we know makes up shit to try to give Xbox an advantage when the opposite is the reality. If I were a die-hard Xbox fanatic, the last person I would believe is him, given his track record of lies and being right 0% of the time.
 

HeisenbergFX4

Gold Member
If you all really believe that the FF2400 is more powerful than 0000FF, just give it a minute. They are BOTH > 10, with 0000FF having a slight advantage. Neither machine is sitting at 10TF. Let's see when others start talking about the Ray Tracing differences and what 0000FF has planned for extra storage. I'll go back and read everything back to page 299. Diana is not burned, so I will be around. Also, Phil does not have a dev kit. I'm going to ask if I can be very specific and I will post either way. Everything's about to drop; the dam has been bombarded by Thanos.

Phil has a takehome which is way ahead of schedule. When Phil got his takehome with the X, it was way later than he has it now. Stop lying please.

I read this differently, as possibly Osiris saying Phil in fact has more of a finished product and not just a devkit.

Plus is he hinting at a $299 price?

Yeah I am just that hungry for info :)
 

R600

Banned
Imagine insiders knowing price points, TFs, system bandwidth, dates of announcements, launch games... huh. And all of that for BOTH consoles!
 

Gamernyc78

Banned
Imagine insiders knowing price points, TFs, system bandwidth, dates of announcements, launch games... huh. And all of that for BOTH consoles!

Imagine how insiders laid out the PS4 specs and price point months to a year before release and they were actually right!!!

Lol, as if there's no history of this 🤦‍♂️🤦‍♂️🧂😊

I know the reality of what might be sometimes hurts.
 

Dlacy13g

Member
If you think about it, outside of "consoles", when is the last time you saw a tech product from a major tech company launch with just 1 SKU? Apple iPhones, MacBooks, etc., Samsung TVs, Lenovo laptops, LG TVs, Sony TVs, Google Pixel... heck, this even applies to appliances, cars, etc. EVERY major manufacturer almost always has a "line" of products that releases with core feature sets that are new and exciting, and then from there additional tech/features/hardware that will differentiate and justify some higher-price-point models.

From what I understand, Anaconda and Lockhart will share 2 core main hardware features... CPU and SSD type. The variables of disc drive, GPU, RAM and possibly the size of the SSD will be the main differentiators. Seems to me this is pretty much the model of Apple today... If they can actually put out a next-gen console that runs next-gen games at up to 1440p, or very easily at 1080p, at $299, that seems like an attractive console IMO. As much as targeting 4K is all the rage in the next-gen talk, the vast majority of the world is not on 4K monitors or TVs. Also, the current gen is not held back by visual fidelity... frankly, visually, the current gen looks great, but it's limited in "how much" it can do by the poor CPUs.
 

THE:MILKMAN

Member
Imagine how insiders laid out the PS4 specs and price point months to a year before release and they were actually right!!!

Lol, as if there's no history of this 🤦‍♂️🤦‍♂️🧂😊

I know the reality of what might be sometimes hurts.

IMO 99.9% of the PS4/Xbox One leaks came directly from the SuperDaE hack. The journalists started to drip-feed the info given to them, but the likes of VGLeaks blew it open.

That's my take.....
 

R600

Banned
Imagine how insiders laid out the PS4 specs and price point months to a year before release and they were actually right!!!

Lol, as if there's no history of this 🤦‍♂️🤦‍♂️🧂😊

I know the reality of what might be sometimes hurts.
Except they didn't. There is not a single person, bar perhaps the CEO of Sony/PlayStation, who knows each of these things. And even then, he probably doesn't have all the info on the competitor's console.

That's how you know someone is full of shit. How do you know the TF, BW, announcement dates, launch titles and price points of BOTH systems? How do you know stuff like GTA VI development? How high up do you have to be to know that?

The first leaks about the PS4 came from Sveetvar on NeoGAF in 2012. This guy had a friend who worked for AMD and knew SOME info, but it was clear he knew nothing about details, launch titles, price points etc.

Don't you ask yourself how someone could know all of that? What source would you have to have to know it?
 

Marlenus

Member
So now that they are reporting the next Xbox will be named based on its purpose, I feel like we can begin narrowing down the possibilities...

“Our naming convention has been around what we think the capabilities are,” Spencer added

Based on the SSD:
Xbox Lightning
Xbox Quick
Xbox Flash

Based on Streaming:
Xbox Cloud
Cloud Box
Xbox Air

Based on Power:
Xbox 12 (for teraflops)
Xbox Tera
Xbox Hammer

...Some solid options for sure

Xbox 4k for desired target resolution.

Xbox Zen because it is cool and quiet. Also links with the CPU being Zen.

Xbox^3 because it is a cube.
 

Dlacy13g

Member
Xbox 4k for desired target resolution.

Xbox Zen because it is cool and quiet. Also links with the CPU being Zen.

Xbox^3 because it is a cube.
I actually like that Xbox Zen name....

I would also toss in the name Xbox 4X (possibly Xbox One 4X) ...as in 4 times as powerful as the Xbox One X.
 

Mega Man

Member
Xbox 4k for desired target resolution.

Xbox Zen because it is cool and quiet. Also links with the CPU being Zen.

Xbox^3 because it is a cube.
Xbox Zen is pretty dope! Maybe with a proprietary Buddha peripheral bundled in for an extra $50?! Rub his belly to turn the box on and off.

I donno, maybe I'm overthinking it a little...
 

MadAnon

Member
IMO 99.9% of the PS4/Xbox One leaks came directly from the SuperDaE hack. The journalists started to drip feed this info given to them but the likes of VGLeaks blew it open.

That's my take.....
This. The big-name reporters and "insiders" got nothing right this far before the release of the current-gen consoles. People are again falling for every "my sources tell me" guy.

Like reporters didn't have a ton of dev friends working with devkits back then? Some unexpected source basically blew the whole thing wide open.
 

Gamernyc78

Banned
It's funny how the people that keep insisting no leaker has gotten leaks right in the past are all from the same camp. And yet they have.

Give us that juicy official info so I can once again (like I've done in the past) be like "I told u so". Like I did when a certain game was downgraded a Megatron and many in here were ignoring the leaks, preview comments, etc... all the signs pointing in that direction.
 