
VGLeaks Durango specs: x64 8-core CPU @1.6GHz, 8GB DDR3 + 32MB ESRAM, 50GB 6x BD...

Karak

Member
Regarding HDMI-IN...it would be awesome if we could use Durango as a high-quality upscaling device :)
I have been crowing about that for a couple of weeks now. HDMI in... I am stoked to see the uses BIG time.

This is a big difference.

I mean, sampling from eSRAM. Interesting.
I know. Some of the devs seemed to think that was really cool. I have/had no real idea why :)
 

Desty

Banned
32MB is enough to do 1080p with 4xMSAA; in fact it's like the exact amount needed, if I added things up right. But let's be real here, the days of MSAA are gone. FXAA is where all devs are heading, as it's way cheaper and most people can't tell the difference (yes, all of us can, but FXAA is improving all the time, and it's more than good enough for most).

Oops, yes my math was wrong. Yay, the days of splitting the screen up are over forever (thank goodness).
 

McHuj

Member
Simply "taking stuff out" wouldn't make it more efficient per FLOP though, at best it would maintain the same efficiency for graphics while reducing the die size.

Also, considering the CPUs in these consoles I'd have thought that maintaining GPGPU performance would be a rather important goal.

I'm not disagreeing with your first point. As to the second, I think the stuff for double precision isn't needed for GPGPU compute and can easily be taken out. If you look at Kepler vs GCN, GCN has a bigger die size and much better double-precision compute capability, but in terms of gaming they're very comparable.
 
Proelite:-
ROPs are not in eSRAM this time around.

Ok that would explain it but...

Durante:-
But that would still make the combined BW of both pools inferior to the BW of Orbis' single pool. Which just seems a bit weak in comparison. And needlessly limited for ESram.

Agreed - why bother with ESRAM (which is costly, and has compromises with tiling etc) if the bandwidth is lower than GDDR5? Seems to defeat the purpose!

I suppose maybe it was the only way they could get the 8GB - someone did say it appears to be a memory-driven design.

Colour me non-plussed so far - on speculative paper at least I think Orbis has the edge, but of course we'll just have to see.
 
Let me see if this helps. For Durango:

Rendering into ESRAM: Yes.
Rendering into DRAM: Yes.
Texturing from ESRAM: Yes.
Texturing from DRAM: Yes.
Resolving into ESRAM: Yes.
Resolving into DRAM: Yes.

For the 360, that would be: yes, no, no, yes, no, yes, respectively (all rendering went through the eDRAM, which couldn't be textured from; resolves went out to main RAM only).

That's awesome :p

That block diagram also indicates the CPU could access the eSRAM too, which is something the 360 could not do... Are they still hoping for procedural generation (akin to the 360's shared L2 cache between the CPU and GPU, which seemed to go very underutilized)?
 

quest

Not Banned from OT
For those thinking that the GPUs will be fundamentally different: if AMD came up with a GPU architecture that's more efficient than GCN, then they'd be using it in their PC graphics cards. Sony and MS both want the most advanced AMD graphics tech available, and in both cases that's GCN. There might be small tweaks here and there, but the GPUs will be fundamentally the same, with ~30% more power on PS4's part.

Unless it is something MS has a patent on and can't use in graphics cards. Or it is not ready for PC cards yet, the way Xenos had unified shaders before the PC market did. Or it sucks away a lot of the GPGPU performance that AMD is pushing on the PC side. I am skeptical about this secret sauce just as much as anyone. Let's say it is true and it works, but GPGPU performance is hurt a ton. Where does next-gen physics come from? It won't come from a couple of Jaguar CPUs. Even Sony has an extra compute core for those kinds of things.
 

gofreak

GAF's Bob Woodward
Ha. I mean, if you'd like to clarify what that means, by all means, share.

It means a dev could theoretically put some frequently accessed (small) assets into eSRAM and read from them there rather than from DDR3. You couldn't do this on Xenos.

It also means you could keep a buffer or two on eSRAM for sampling in subsequent render passes rather than copying out to DDR3 and waiting.

But there are compromises here too. It's really interesting that they did this.
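
To make the "keep a buffer resident" point concrete, here's a rough back-of-envelope sketch. The DDR3 bandwidth figure is purely an illustrative assumption, not a leaked number:

```python
# Back-of-envelope: what sampling a buffer in place in eSRAM might save,
# versus a Xenos-style resolve out to main RAM followed by a read back.
# The bandwidth figure below is an illustrative assumption, not a leaked spec.

BUFFER_BYTES = 1920 * 1080 * 4   # one 1080p RGBA8 render target, ~8.3 MB
DDR3_BW = 68e9                   # hypothetical DDR3 bandwidth, bytes/s
FRAME_MS = 1000 / 60             # frame budget at 60 fps, ~16.7 ms

# Round trip: write the buffer out to DDR3, then read it back for sampling.
round_trip_ms = 2 * BUFFER_BYTES / DDR3_BW * 1000

print(f"DDR3 round trip per buffer, per pass: {round_trip_ms:.2f} ms "
      f"({round_trip_ms / FRAME_MS:.1%} of a 60 fps frame)")
```

Small per pass, but it adds up across buffers and passes, and the copies would also be eating the same DDR3 bandwidth everything else needs.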
 

i-Lo

Member
The original 6670 next gen console rumor last year was accurate, and it was Orbis. That's changed. Microsoft had longer to work on their final design. It's more custom.

The comparisons to the PS3 aren't necessarily inaccurate, I think. Durango's architecture is more exotic than Orbis by all indications. I just think, based on Microsoft's dev history and general developer sentiment, that Microsoft has Durango's development environment more on lockdown than Sony had with the PS3.

So many words for the simple message: Sony were LTTP and are staffed by comparatively incompetent engineers as opposed to MS's. Ergo, the specs of Durango may "look" less capable than Orbis' but are in fact not.
 

Durante

Member
This is a big difference.

I mean, sampling from eSRAM. Interesting.
True, it's far more useful, but this (more flexible embedded memory access) has been rumoured for a long time.

If the bandwidth numbers in the OP are true (which I still doubt) then it's still less BW than Orbis, while introducing the complexity of dealing with 2 memory pools. I don't think that's a good trade-off.


Specs we can compare between all three systems.

RAM amount: 720 = 2x PS4 = 2x Wii U
RAM bandwidth: PS4 >>>> 720 (a mover won't help too much, since a certain amount of ESRAM will always be taken up by framebuffer data) >>> Wii U

GPU: PS4 = 1.5x 720 = 2-3x Wii U.
That assumes quite a bit more GPU power than I'd expect.

Also, it disregards the CPU. Which looks to be
PS4 = 720 >>>>>>> Wii U
at this point.


Oops, yes my math was wrong. Yay, the days of splitting the screen up are over forever (thank goodness).
No, you were right. You can't do 1920x1080 with 4xMSAA in 32 MB without tiling. Unless you enjoy splitting and sorting all your polygons before rendering ;)
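
To spell the numbers out, here's a quick sketch of the raw buffer sizes, assuming an uncompressed RGBA8 colour target plus a D24S8 depth/stencil target (a simplification, but it shows the scale):

```python
# Does a 1080p render target fit in 32 MiB of eSRAM at various MSAA levels?
# Assumes 4 bytes/sample for colour (RGBA8) and 4 bytes/sample for
# depth/stencil (D24S8), with no framebuffer compression.

WIDTH, HEIGHT = 1920, 1080
ESRAM_BYTES = 32 * 1024 * 1024

for samples in (1, 2, 4):
    colour = WIDTH * HEIGHT * 4 * samples
    depth = WIDTH * HEIGHT * 4 * samples
    total = colour + depth
    verdict = "fits" if total <= ESRAM_BYTES else "needs tiling"
    print(f"{samples}xMSAA: {total / 2**20:5.1f} MiB -> {verdict}")

# Roughly: 1x is ~15.8 MiB (fits), 2x is ~31.6 MiB (just barely fits),
# 4x is ~63.3 MiB (about double the pool, so it needs tiling).
```

So 2xMSAA squeaks in with almost nothing to spare for anything else, and 4xMSAA is roughly twice the pool.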
 

aegies

Member
I have been crowing about that for a couple of weeks now. HDMI in... I am stoked to see the uses BIG time.


I know. Some of the devs seemed to think that was really cool. I have/had no real idea why :)

I wonder if Microsoft will tout Durango as a 4K intermediary solution? Pipe content through it for robust upscaling or something?
 

derFeef

Member
It means a dev could theoretically put some frequently accessed (small) assets into eSRAM and read from them there rather than from DDR3. You couldn't do this on Xenos.

It also means you could keep a buffer or two on eSRAM for sampling in subsequent render passes rather than copying out to DDR3 and waiting.

But there are compromises here too. It's really interesting that they did this.

32MB seems to indicate a rather limited use for such things, though.
 

EvB

Member
Is there any way that they could use HDMI in to allow some kind of backwards compatibility bridge, so that you could actually play 360 games within the new interface, with the new dash and new in-game overlays?

The 360 could be updated to never show any 360 GUI elements, and the machines could be linked in some way to pass input between them.
 

i-Lo

Member
I have been crowing about that for a couple of weeks now. HDMI in... I am stoked to see the uses BIG time.


I know. Some of the devs seemed to think that was really cool. I have/had no real idea why :)

Are you insinuating that XB3 is some sort of receiver?
 
True, it's far more useful, but this (more flexible embedded memory access) has been rumoured for a long time.

If the bandwidth numbers in the OP are true (which I still doubt) then it's still less BW than Orbis, while introducing the complexity of dealing with 2 memory pools. I don't think that's a good trade-off.


That assumes quite a bit more GPU power than I'd expect.

Also, it disregards the CPU. Which looks to be
PS4 = 720 >>>>>>> Wii U
at this point.

Wii U's GPU is roughly in the 340-540 GFLOPs range.
 
For those thinking that the GPUs will be fundamentally different: if AMD came up with a GPU architecture that's more efficient than GCN, then they'd be using it in their PC graphics cards. Sony and MS both want the most advanced AMD graphics tech available, and in both cases that's GCN. There might be small tweaks here and there, but the GPUs will be fundamentally the same, with ~30% more power on PS4's part.

Not if that efficiency comes at the cost of the ability to run non-graphics code, and that's an area where both AMD and Nvidia seem to be pushing their GPUs so they can expand their market past enthusiast consumer gaming performance...
 

aegies

Member
So many words for the simple message: Sony were LTTP and are staffed by comparatively incompetent engineers as opposed to MS's. Ergo, the specs of Durango may "look" less capable than Orbis' but are in fact not.

I haven't put "incompetent" anywhere. Consoles are products of compromise. Each platform holder has to look at the environment and do what they think is best. All the smarts in the world can only help so much against specific environmental concerns.

There may be a few feature differences here and there, but the GCN architecture is still fairly new for AMD, and any major overhaul is too far away to make it into consoles this year. In any case, I would still be very surprised if AMD on the one hand sold the existing architecture to Sony, and on the other hand sold something much more efficient to MS. There'll be differences based on differing hardware sensibilities between MS and Sony, and the amount of time each of them put into their final hardware, but we've got two consoles coming out at the same time with GPUs from the same company, so we shouldn't get ahead of ourselves in terms of potential differences between the two.

Consoles are a place for experimentation. The 360 had features from ATi cards that took another 18 months to reach consumers.
 
Orbis:
[tumblr_mgvft79UQo1rlccoho1_400.gif]


Durango:
[anascar3.gif]


Wii U:
[148.gif]
 

Thraktor

Member
How do you know they're not going to? Whatever tech AMD came up with can probably filter into PC GPUs as well. Consoles could just be the first products introducing these features.

There may be a few feature differences here and there, but the GCN architecture is still fairly new for AMD, and any major overhaul is too far away to make it into consoles this year. In any case, I would still be very surprised if AMD on the one hand sold the existing architecture to Sony, and on the other hand sold something much more efficient to MS. There'll be differences based on differing hardware sensibilities between MS and Sony, and the amount of time each of them put into their final hardware, but we've got two consoles coming out at the same time with GPUs from the same company, so we shouldn't get ahead of ourselves in terms of potential differences between the two.
 

gofreak

GAF's Bob Woodward
32MB seems to indicate a rather limited use for such things, though.

That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?
 

Jadedx

Banned
Specs we can compare between all three systems.

RAM amount: 720 = 2x PS4 = 2x Wii U
RAM bandwidth: PS4 >>>> 720 (a mover won't help too much, since a certain amount of ESRAM will always be taken up by framebuffer data) >>> Wii U

GPU: PS4 = 1.5x 720 = 2-3x Wii U.

All in all, the gap between all three systems isn't that big. Certainly, there's nothing near the gap between the Wii and the HD consoles. It really is more of a Dreamcast vs Xbox/GC situation.

Read the thread, this is not correct, even though the PS4 will have an advantage.


The Wii U will be slower than both HD systems. Probably a bigger difference than Xbox vs PS2, though probably not as bad as the HD twins vs Wii.
 

Durante

Member
That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?
That's pretty much my line of thought as well.

I guess the issues are that GDDR5 is more expensive, and that MS really wanted 8 GB for their set-top box / OS level ambitions.
Once you're stuck with cheaper memory to fulfill that size goal, your only options are embedded RAM or a second separate bus. Which is REALLY expensive and complicates development.
 

Jadedx

Banned
That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?

Price.
 

aegies

Member
That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?

I *think* that there's a solution in place to compress data to move it across the bus, then almost instantly decompress that data into memory. But it can be hampered by overly tessellated geometry, for example.

Again, I think, here. I wasn't told this, per se.
 

fritolay

Member
Let's be honest: if Sony or Microsoft's next machine has a leg up on raw gaming performance, that may not matter.

1) Ports to both systems will be key. Will people port MS system games to Sony, not optimize much, and call it a day due to costs? Probably. Everyone is saying how expensive development will be. MS is a software company and they are good at making development tools; I don't know if the tools will be similar to the 360's to assist with the new hardware. If they are, then I can see development happening on the MS system first and then moving to Sony's. That is where MS has the leg up. Now if the MS machine has more raw power than Sony's, then again it could be a port where they just make minor changes until it runs well enough to ship, like we saw last generation.

2) Customers will not care about raw gaming performance. It will be the games they see on TV and YouTube that they will want to play. If both systems come out the same year, then the third-party developers, the length of time they have had their dev kits, etc. will be huge at launch. If people are impressed enough they may switch systems (see #3).

3) Online - Existing PSN and LIVE customers will want to keep their online profiles and will want to stick with Sony or MS unless there is a big reason to switch platforms.

4) Wow factor - Say one system was just a raw gaming machine without many features, like motion cameras or tablet controllers: just gaming. How well would it sell? I dunno. If a system comes out with the next killer feature (MS with DVR or some other trick up their sleeve) then it may be the deciding factor after games. The Wii did this with motion controls on an inferior system, plus Nintendo IP games. People want devices to do more things to justify the upgrade. Sony had this with CD, DVD, and Blu-ray. DVD in particular really pushed the PS2 at the time, because you could get the best DVD player on the market when not all DVD players would even play a new DVD. I know many who bought one, and for many families the justification was getting a DVD player at that price.

I don't like that MS may be giving up gaming performance for Kinect. I think Kinect as it was last gen was a gimmick and not worth losing a "system performance war" with Sony. However, it seems like raw gaming performance will not matter if the systems are close. I still don't like it.
 

derFeef

Member
That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?

Maybe it's true now, but it was not more than 1.5 years ago.
And maybe the large memory pool has some benefits that first-party devs wanted to have.
 

JaggedSac

Member
There may be a few feature differences here and there, but the GCN architecture is still fairly new for AMD, and any major overhaul is too far away to make it into consoles this year. In any case, I would still be very surprised if AMD on the one hand sold the existing architecture to Sony, and on the other hand sold something much more efficient to MS. There'll be differences based on differing hardware sensibilities between MS and Sony, and the amount of time each of them put into their final hardware, but we've got two consoles coming out at the same time with GPUs from the same company, so we shouldn't get ahead of ourselves in terms of potential differences between the two.

Why would AMD put technology in Sony's stuff that MS paid to be developed? That would 100% have been in a contract.

That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?

We don't know about the 4 move engines yet; they seem to play a key part in this.
 

Nibel

Member
Next-gen Xbox specs leak

Website vgleaks.com is claiming a world-wide exclusive by revealing the full spec for the upcoming next-generation Xbox, codenamed Durango. While there is obviously no official substantiation for the information posted, key elements of the spec match the overall outline of the hardware we have received from trusted sources and the leaker has come forward with proof about the origins of the information - and it appears genuine.

First up, let's deal with the elements of the spec we definitely know to be true: Durango features an eight-core CPU from AMD running at 1.6GHz, just like its upcoming next-gen PlayStation competitor - Orbis. As we explained last week, these are based on AMD's new PC technology, Jaguar - built for the entry-level laptop and tablet market. The initial PC Jaguar CPUs are configured in a quad-core arrangement - this doubles for both next-gen consoles.

In the case of Durango, the CPU is married up with 8GB of DDR3 memory, working in concert with 32MB of what is dubbed "ESRAM" - fast work RAM connected directly to the GPU. The two pools of memory operate in parallel, and while we haven't confirmed overall bandwidth, the leak's 170GB/s throughput certainly seems plausible. Also interesting about the RAM set-up is that the ESRAM isn't merely connected to the graphics core as is the case with the Xbox 360's 10MB of eDRAM - in Durango, it's hooked up to the northbridge (the interconnect between all major internal components), meaning it offers general access to other components in addition to the graphics core.

The leak also offers confirmation of last week's story that the new PlayStation Orbis graphics core appears - at face value - to be significantly more powerful than the GPU in Durango. Our sources suggest that the new PlayStation offers up 18 Radeon GCN compute units at 800MHz. The leak matches older rumours suggesting that Durango features only 12, running at the same clock speed. Bearing in mind the stated peak performance metrics in the leak, there is a clear deficit between the 1.23 teraflops offered by Durango, and the 1.84TF found in Orbis.

The leak also addresses the three mysterious hardware accelerator modules we mentioned last week in our Orbis piece. We find one of them covering audio (including echo cancellation tech for Kinect), while another appears to be an accelerated hardware video encoder - this is interesting in that we also find that the new information suggests an HDMI input as part of the design, not just an output. In theory then, users could record their TV shows direct from set-top boxes, or import their camcorder footage directly onto Durango. It's a remarkable inclusion, for sure, suggesting that Microsoft is indeed investing heavily in the media credentials of the device. The final hardware module is the most mysterious, named simply "Data Move Engines" for which there is no additional data supplied.

Other elements of the spec throw up some positive surprises too. Kinect appears to have its own dedicated input, suggesting that the problems introduced by using USB on the Xbox 360 could be mitigated. The fact there is an input at all suggests that the sensor will remain a separate and distinct unit that attaches to the console. The USB ports themselves are upgraded to the 3.0 standard - good for moving media files about and for high levels of bandwidth to game data. A large hard drive is included as standard (our sources suggest a 500GB minimum) while a 6x Blu-ray drive is also being mooted, which supports 50GB dual-layer discs. Networking is achieved with a fast gigabit Ethernet port, with both WiFi and WiFi Direct support.

So the question of the hour is, just how accurate is the information? Based on our communications with the leaker, the data appears genuine - the only real question is how recent it is. The proof presented by the source suggests that the data is at most nine months old: factoring in how long it takes to create a console, the chances are that there will not be many changes implemented since then.

Eurogamer
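
Worth noting: the two teraflop figures in that piece follow directly from the standard GCN arithmetic (64 shader ALUs per compute unit, each doing a fused multiply-add, i.e. 2 flops, per cycle):

```python
# Peak-throughput arithmetic behind the quoted 1.23 TF / 1.84 TF figures.
# Each GCN compute unit has 64 shader ALUs; a fused multiply-add counts
# as 2 floating-point operations per cycle.

def gcn_peak_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000

print(f"Durango: 12 CUs @ 800MHz -> {gcn_peak_tflops(12, 0.8):.2f} TF")
print(f"Orbis:   18 CUs @ 800MHz -> {gcn_peak_tflops(18, 0.8):.2f} TF")
```

So with clocks equal, the quoted deficit is exactly the 12-vs-18 compute unit ratio.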
 

THE:MILKMAN

Member
That's why it seems weird.

It seems like Microsoft figured they needed a more balanced approach to bandwidth on both ends of the pipeline, rather than skewing the bandwidth to the bottom end of the pipeline as in Xenos. So they've taken ROPs back into the parent GPU, have them talk to eSRAM or DDR3, and let the eSRAM be read/write instead of write-only.

But surely it would just be way simpler, way more flexible, and faster too, to use a chunk of GDDR5?

I'm just throwing this out there as a wild guess. Could it be that the Data Move Engine needs this type of setup to give the benefits?
 
Let me see if this helps. For Durango:

Rendering into ESRAM: Yes.
Rendering into DRAM: Yes.
Texturing from ESRAM: Yes.
Texturing from DRAM: Yes.
Resolving into ESRAM: Yes.
Resolving into DRAM: Yes.

For the 360, that would be: yes, no, no, yes, no, yes, respectively (all rendering went through the eDRAM, which couldn't be textured from; resolves went out to main RAM only).

Imagine if what the data movers did was take output from the GPU and feed it quickly into the CPU while performing some super quick operations that allowed the CPU to process that data very quickly, and then shoot it back to the GPU, where another data mover performs another transformation so that the GPU can then do its part very quickly.

As if MS asked AMD to create a way to bridge the gap between the two so as to allow both CPU and GPU to assist each other in a very complementary way. It would enable a form of heterogeneous computing that hasn't been seen yet.
 
So for the Durango to actually be a functional DVR, won't it need more than HDMI in? HDMI isn't going to feed it live broadcasts. Or is the suggestion now that the Durango is merely a means to control the DVR you already have?
 

open_mouth_

insert_foot_
I think MS realizes too that the "winners" of the last few generations have not been the most powerful piece of hardware... e.g. Wii/360, PS2, DS, 3DS, etc.

Price + Marketing + Form Factor + Games



.....

Power is somewhere down here
 
Read the thread, this is not correct, even though the PS4 will have an advantage.


The Wii U will be slower than both HD systems. Probably a bigger difference than Xbox vs PS2, though probably not as bad as the HD twins vs Wii.

It's bad if you look at the CPU. I think it might be just as bad. We don't have much on the Wii U specs to know for sure, though.
 

derFeef

Member
Imagine if what the data movers did was take output from the GPU and feed it quickly into the CPU while performing some super quick operations that allowed the CPU to process that data very quickly, and then shoot it back to the GPU, where another data mover performs another transformation so that the GPU can then do its part very quickly.

As if MS asked AMD to create a way to bridge the gap between the two so as to allow both CPU and GPU to assist each other in a very complementary way. It would enable a form of heterogeneous computing that hasn't been seen yet.

If those things exist, they are more like compensations for the compromises that have been made. (if)
 

aegies

Member
Imagine if what the data movers did was take output from the GPU and feed it quickly into the CPU while performing some super quick operations that allowed the CPU to process that data very quickly, and then shoot it back to the GPU, where another data mover performs another transformation so that the GPU can then do its part very quickly.

As if MS asked AMD to create a way to bridge the gap between the two so as to allow both CPU and GPU to assist each other in a very complementary way. It would enable a form of heterogeneous computing that hasn't been seen yet.

That is consistent with their language in materials aimed at developers, but I really don't know. I do know that HSA is mentioned repeatedly.
 

meta4

Junior Member
I think Sony should not include stuff like HDMI in right now, which will only increase costs. Keep the price low and see how the market reacts. If there is demand for this, then they can always add that functionality in future units, right?
 
Proelite:-


Ok that would explain it but...

Durante:-


Agreed - why bother with ESRAM (which is costly, and has compromises with tiling etc) if the bandwidth is lower than GDDR5? Seems to defeat the purpose!

I suppose maybe it was the only way they could get the 8GB - someone did say it appears to be a memory-driven design.

Colour me non-plussed so far - on speculative paper at least I think Orbis has the edge, but of course we'll just have to see.


Huh?

If it's enough BW to do the job, who cares if it's "less" than the other guy's. The combined BW is 170 GB/s. Close enough.

The more we learn about Durango, the more I like it. It seems to have a lot of really smart silicon. I think the writing's on the wall: it's going to be able to compete head to head with Orbis, between the DMA engines, the ESRAM looking better and better, and the RAM quantity advantage.
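
For what it's worth, here's one plausible way the ~170 GB/s combined figure could break down. The bus widths and clocks here are assumptions for illustration, not confirmed specs:

```python
# One hypothetical breakdown consistent with a ~170 GB/s combined figure.
# Bus widths and clocks below are illustrative assumptions, not confirmed specs.

ddr3_bw = 2133e6 * (256 / 8)     # DDR3-2133 on a 256-bit bus: ~68.3 GB/s
esram_bw = 800e6 * (1024 / 8)    # 1024-bit eSRAM path at 800 MHz: ~102.4 GB/s

print(f"DDR3:     {ddr3_bw / 1e9:6.1f} GB/s")
print(f"eSRAM:    {esram_bw / 1e9:6.1f} GB/s")
print(f"Combined: {(ddr3_bw + esram_bw) / 1e9:6.1f} GB/s")  # ~170.7 GB/s
```

The caveat with any split like this: only the reads and writes that actually hit the 32MB pool see the eSRAM share of that total.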
 

Proelite

Member
Anyone got anything on the Data Move Engine?

Sounds like a DSP or FPGA or something.

I think Aegies can give more information. ;)

I feel like a lot of people see that 1.2 teraflop figure and then immediately underestimate the Durango.

It'll take a 2.5 teraflop or better GPU on the PC to match this highly customized GPU, which has customizations that will probably never be ported to PCs for the reason that PCs don't need them.
 

Jadedx

Banned
Another question: most people say the 32 megs is good for 1080p @ 2xMSAA; is that using a tiled rendering method or not? Because I read a few days ago on B3D that with tiling it could easily do 1080p w/ 4xMSAA.
 

EvB

Member
I think MS realizes too that the "winners" of the last few generations have not been the most powerful piece of hardware... e.g. Wii/360, PS2, DS, 3DS, etc.

Price + Marketing + Form Factor + Games



.....

Power is somewhere down here



Ultimately, as the platform holder, Microsoft have enough sway with large publishers to demand platform parity with another console, should it turn out that the other machine is significantly more powerful.
 
This thread reminds me of GAF in early 2005-2006. Everyone was convinced that the PS3 was like 2-3x more powerful than the 360, because the PS3 had more raw power, and there was no way the 360 would be as efficient with its RAM and its shaders as they said they would be.


PS3's on-paper advantage was mostly down to the CPU's impressive peak math numbers, but it was a massively different paradigm to use efficiently. The system's GPU was not as modern and efficient as the X360's. Its max vertex throughput was much lower. PS3 relied on the CPU to make up the deficit. But not everyone got the same results: around 2006-2007, many third parties didn't bother to push any graphics work off to the Cell.

But this time around, by all accounts, the PS4 will be easy to develop for. A conventional x86 with 8 homogeneous cores/threads (likely almost identical to the competition), and with a similar but definitively stronger GPU. The biggest difference in the hardware seems to come in the memory pools and data buses, for which we don't have all of the details yet. But right now, I am skeptical that Durango is more powerful, as some once said. PS4 doesn't look inefficient at all. It looks well balanced and focused on videogames. I think Sony made the proper corrections.

Personally, if there is more than an iota of extra power with PS4, I honestly don't see it being used by third parties. It's all about the first parties then. And when it comes to first parties, I expect 343 to amaze us when they go beyond what can be expected from XB3.




Discounting first-party games for a moment, the third parties will probably target the lowest common denominator between the systems: 3-3.5GB of RAM to reuse the same assets between systems, 50GB of data, which is the most a next Xbox disc can apparently contain, and the most shading power the Xbox has to offer in whatever scene they are drawing. That scenario seems to favor PS4. I don't think they'll have to go out of their way to push the PS4; it will come naturally as a result of the extra raw shading power and bandwidth. The games will just scale assets in the engine, like most PC/X360/PS3 games today. So I see a reversal next gen; I think PS4 may win the majority of the face-offs.
 

i-Lo

Member
Huh?

If it's enough BW to do the job, who cares if it's "less" than the other guy's. The combined BW is 170 GB/s. Close enough.

The more we learn about Durango, the more I like it. It seems to have a lot of really smart silicon. I think the writing's on the wall: it's going to be able to compete head to head with Orbis, between the DMA engines, the ESRAM looking better and better, and the RAM quantity advantage.

So now we're once again heading toward XB3 > Orbis.

This is like a perennial oscillation. Some people here will outright sublimate during E3.
 