
vg247-PS4: new kits shipping now, AMD A10 used as base, final version next summer

Binabik15

Member
You are approaching this in the completely wrong way. Both consoles are designed with different goals in mind. One will have faster frame rates, better texture streaming, and the ability to push more polygons. The other will offer more texture variety in static scenes (Heavy Rain), higher resolution textures, and better detail.

If the rumored specs turn out to be 100% on the money, the machines will have advantages for different genres, and exclusives will look even more exceptional than they already do this generation. Third parties might have porting headaches, but all things considered, it will be a wash.



Which one is which, and why does one have a framerate advantage by design? Is there anything in those rumoured GPUs or CPUs to help with physics, fluid/cloth simulations and clipping? And who cares about Heavy Rain :(
 

antic604

Banned
This will be extremely disappointing if true. I hope we don't have Skyrim-like situations with the PS4

Well, it shouldn't - considering the amount of RAM is 4GB, the jump in memory space (6-8x) will be much bigger than the jump in useful screen resolution (2-3x), so that alone should ensure much more texture data can stay resident, reducing the need to cache from HDD/Blu-ray and improving the pop-in. In the end it's all down to the programming tools available to devs and some other HW solutions we might not be aware of.

Looking at how this generation worked out for Sony (inferior 3rd party games) I'd hope they took notes about what needs to be improved.
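The multipliers in this post are easy to sanity-check. A quick sketch, assuming a 512MB PS3 baseline and 720p-vs-1080p render targets (both are assumptions, not confirmed figures):

```python
# Sanity-checking the "6-8x memory vs 2-3x resolution" claim.
# Assumed baselines: 512 MB total RAM (PS3), 1280x720 vs 1920x1080 targets.

ram_jump = (4 * 1024) / 512                  # 4 GB vs 512 MB
pixel_jump = (1920 * 1080) / (1280 * 720)    # 1080p vs 720p pixel counts

print(ram_jump, pixel_jump)  # 8.0 2.25
```

So the memory jump (8x) comfortably outpaces the pixel jump (2.25x), which is the post's point.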
 

meta4

Junior Member
Well, it shouldn't - considering the amount of RAM is 4GB, the jump in memory space (6-8x) will be much bigger than the jump in useful screen resolution (2-3x), so that alone should ensure much more texture data can stay resident, reducing the need to cache from HDD/Blu-ray and improving the pop-in. In the end it's all down to the programming tools available to devs and some other HW solutions we might not be aware of.

Looking at how this generation worked out for Sony (inferior 3rd party games) I'd hope they took notes about what needs to be improved.

Hope so! So considering the leaked specs of Orbis and Durango, is it safe to say both are on par, with minor differences and advantages here and there?
 

TronLight

Everybody is Mikkelsexual
100MB seems rather high - eDRAM is more expensive and complex to fab than eSRAM but has a lot higher density. I am not an expert, but I guess they would need at least that much space to fit a 1080p picture with (some?) effects into it. That is why the 360 sometimes used sub-HD resolutions: a whole 720p picture would not fit into 10MB.

For your DDR3 question: PC3-24000 DDR3 SDRAM has 24GB/s vs 48GB/s for GDDR5
(sorry, I might have mixed up Gbit/s and GB/s some posts ahead). Of course those values differ depending on clock rate, bus width, etc.
Ah. I thought that since the 360 has 10MB and the Wii U has 32MB, the next Xbox could have had something like 100MB. I knew that eDRAM was very expensive, but not that much. :lol

But then, if eDRAM has a bandwidth of 250GB/s more or less (see above) and GDDR5 can achieve 200GB/s, wouldn't it be better to skip eDRAM and use GDDR5 instead, since you get much more space for just a 50GB/s hit on bandwidth?

So basically GDDR5 has twice the bandwidth of DDR3?
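The bandwidth figures being traded in these posts all fall out of one formula: bus width times transfer rate. A rough sketch; the bus widths and data rates below are illustrative values for these memory types, not confirmed console specs:

```python
def peak_bandwidth_gb_s(bus_bits, transfer_rate_mt_s):
    """Peak theoretical bandwidth in GB/s: bytes per transfer x transfers/s."""
    return (bus_bits / 8) * transfer_rate_mt_s / 1000

# PC3-24000 DDR3: a 64-bit module at 3000 MT/s
print(peak_bandwidth_gb_s(64, 3000))   # 24.0 GB/s
# GDDR5 on a 256-bit bus at 6000 MT/s (the kind of setup behind 192 GB/s rumors)
print(peak_bandwidth_gb_s(256, 6000))  # 192.0 GB/s
```

So "twice the bandwidth" is shorthand: per pin GDDR5 runs much faster, and it is usually paired with wider buses on top of that.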
 
Which one is which, and why does one have a framerate advantage by design? Is there anything in those rumoured GPUs or CPUs to help with physics, fluid/cloth simulations and clipping? And who cares about Heavy Rain :(
I believe he was suggesting the PS4 would have the framerate advantage. I guess this ties in with the mandatory 3D for all games rumour from a while back?
 

Durante

Member
Hope so! So considering the leaked specs of Orbis and Durango, is it safe to say both are on par, with minor differences and advantages here and there?
If the current leaks are true, the overall capabilities should be somewhat similar, but achieved quite differently.
 

RoboPlato

I'd be in the dick
Damn, this thread got interesting after I went to bed. The news about the Durango having customizations to hit a similar performance level to a 680 and the Orbis having a 30% clockspeed advantage are both awesome rumors for their respective consoles. It also seems to shed light on why there's a lack of information/confidence in Sony. They're probably not doing across-the-board optimizations and are just focusing on customizing specific areas targeted by their first parties (hopefully real-time lighting and tessellation), and a lot of devs are still in the dark. The Durango customizations sound impressive if true, but if my guess is right Sony's method saves them some cash and will probably get similar results in the end.

If these rumors are correct then both companies have hit the performance level I wanted/expected with cheaper, cooler, lower power parts and that's better for everyone. If devs are happy with it, we'll be happy.
 

iceatcs

Junior Member
I hope they go for GDDR5. That would make PC gamers happy - I want more games to use better-optimised configs for high-end hardware, especially for people who have GDDR5 on their cards.
120Hz monitors will become standard - perfect timing.
 

thuway

Member
Damn, this thread got interesting after I went to bed. The news about the Durango having customizations to hit a similar performance level to a 680 and the Orbis having a 30% clockspeed advantage are both awesome rumors for their respective consoles. It also seems to shed light on why there's a lack of information/confidence in Sony. They're probably not doing across-the-board optimizations and are just focusing on customizing specific areas targeted by their first parties (hopefully real-time lighting and tessellation), and a lot of devs are still in the dark. The Durango customizations sound impressive if true, but if my guess is right Sony's method saves them some cash and will probably get similar results in the end.

If these rumors are correct then both companies have hit the performance level I wanted/expected with cheaper, cooler, lower power parts and that's better for everyone. If devs are happy with it, we'll be happy.

I know nothing about software in development nor final specs, but if our sources are correct, then both machines have different architectures. This will foster a growth of exclusives that have particular areas of beauty that will be unmatched by third party development. It's a beautiful thing.
 
Damn, this thread got interesting after I went to bed. The news about the Durango having customizations to hit a similar performance level to a 680 and the Orbis having a 30% clockspeed advantage are both awesome rumors for their respective consoles. It also seems to shed light on why there's a lack of information/confidence in Sony. They're probably not doing across-the-board optimizations and are just focusing on customizing specific areas targeted by their first parties (hopefully real-time lighting and tessellation), and a lot of devs are still in the dark. The Durango customizations sound impressive if true, but if my guess is right Sony's method saves them some cash and will probably get similar results in the end.

If these rumors are correct then both companies have hit the performance level I wanted/expected with cheaper, cooler, lower power parts and that's better for everyone. If devs are happy with it, we'll be happy.

The curious thing is that a 1.84 TFLOP GPU with 192 GB/s bandwidth to RAM will behave a little better than an AMD/ATI 7850 at 153.6 GB/s. And on the other hand a special-sauced 1.2 TFLOP GPU will behave like an Nvidia GTX 680. Strange.
 

RoboPlato

I'd be in the dick
I know nothing about software in development nor final specs, but if our sources are correct, then both machines have different architectures. This will foster a growth of exclusives that have particular areas of beauty that will be unmatched by third party development. It's a beautiful thing.
As long as I don't have to suffer through shit ports like this gen I'll be happy. I'll gladly take multiplatform games that are different visually but play to each console's strengths instead of one getting the shaft.
 

RoboPlato

I'd be in the dick
Consoles powerful enough to run the entire Crysis trilogy as intended. Sounds good to me.
I want a Crysis: Maximum Collection or something for next gen. All 3 on one disc at 1080p with all the eye candy. 2 is my favorite FPS this gen and 3 seems even better. I'd love to be able to play them properly.
 

i-Lo

Member
Damn, this thread got interesting after I went to bed. The news about the Durango having customizations to hit a similar performance level to a 680 and the Orbis having a 30% clockspeed advantage are both awesome rumors for their respective consoles. It also seems to shed light on why there's a lack of information/confidence in Sony. They're probably not doing across-the-board optimizations and are just focusing on customizing specific areas targeted by their first parties (hopefully real-time lighting and tessellation), and a lot of devs are still in the dark. The Durango customizations sound impressive if true, but if my guess is right Sony's method saves them some cash and will probably get similar results in the end.

If these rumors are correct then both companies have hit the performance level I wanted/expected with cheaper, cooler, lower power parts and that's better for everyone. If devs are happy with it, we'll be happy.

From the outside it gives the impression of Sony playing reactionary catch-up to MS, who look prepared and steadfast in their plans. Maybe this is the reason behind developers' alleged 'lack of confidence'. Perhaps internally it's not so bad, but this obfuscation really doesn't help.
 

Kagari

Crystal Bearer
As long as I don't have to suffer through shit ports like this gen I'll be happy. I'll gladly take multiplatform games that are different visually but play to each console's strengths instead of one getting the shaft.

If it makes you feel better, I've heard that developers really helped shape the development of the machine this time.
 

i-Lo

Member
If it makes you feel better, I've heard that developers really helped shape the development of the machine this time.

First or third parties? If it has been a joint effort, which side got the bigger say?

PS: I don't think they'd be pleased with the RAM situation if the OS eats into the precious GDDR5 RAM.
 
There's lots of talk the last few pages as if there's been news...has there?

The VGLeaks thing is old. It IS the most substantial and most reasonable leak we had, but it's been there a long time.

Then we heard via VG247 that new dev kits had a lot more RAM (8GB or 16GB). And there was talk of a move away from Steamroller to Jaguar (I think just via forum whisperings here and elsewhere?)

Now Thuway is claiming it is 4GB GDDR5 in the final box. Is that the 'news' of the last few hours?

On that idea...I would congratulate Sony if they did bump to 4 while keeping it as GDDR5. When VG247's report came, I assumed the increase in RAM was a concession to size over speed, with a switch to regular DDR. I am a bit skeptical it's still GDDR5.
Apparently thuway got confirmation from someone and PM'd me, and from his info and systemfehler's post (with a cite that explains some of the VGleaks post) we now have more confidence in that post. See the RED highlighted area in my post.

4GB of GDDR5 is not confirmed, but GDDR5-class bandwidth is likely. Notice the VGleaks post does not mention GDDR5; everyone reading it assumes GDDR5 because of the bandwidth. GDDR5 would pull too much current, even clocked slow, for XTV and idle/streaming. Using just a Temash APU with the second GPU turned off, the minimum power would be 2-5W (APU) plus 5W for the turned-off second GPU, and if it's GDDR5 the RAM would burn more power than the APU and turned-off GPU combined. Correct me if I'm wrong.
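The idle-power argument above is just a sum. A back-of-envelope sketch; every wattage here is the poster's guess or my own placeholder, not a measured figure:

```python
# Idle/XTV-streaming power budget, per the claim that GDDR5 alone would
# outdraw the APU plus the powered-down second GPU.
apu_idle_w = 5.0      # upper end of the 2-5 W Temash-class APU guess
gpu_off_w = 5.0       # second GPU powered down but still on the rail
gddr5_idle_w = 12.0   # placeholder: GDDR5 draws real power even downclocked

print(gddr5_idle_w > apu_idle_w + gpu_off_w)  # True -> the RAM dominates
```

If the GDDR5 figure is anywhere near that, the claim holds; with a low-power DRAM type it would not.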
 

Nizz

Member
Which one is which and why does one have a framerate advantage by design? Is there anything to help with physics and fluid/cloth simulations and clipping in those rumoured gpus or cpus? And who cares for Heavy Rain :(

I believe he was suggesting the PS4 would have the framerate advantage. I guess this ties in with the mandatory 3D for all games rumour from a while back?

Consoles powerful enough to run the entire Crysis trilogy as intended. Sounds good to me.

All this sounds great to me.
 
Apparently thuway got confirmation from someone and PM'd me, and from his info and systemfehler's post (with a cite that explains some of the VGleaks post) we now have more confidence in that post. See the RED highlighted area in my post.

Does the PM reinforce your post and speculation on Orbis? I sure hope so, cause your version is more cutting-edge and powerful enough to satisfy the fanbase for years. I think it's necessary in order to compete with Microsoft, which has more customization and backwards compatibility.
 

DieH@rd

Banned
4 Gig GDDR5 is not confirmed but GDDR5 bandwidth is likely. GDDR5 will pull too much current even clocked slow for XTV and idle/streaming.

Stacked DDR4 with 1024bit bus could reach 192 GB/s... and we all know that Sony likes stacking. :) GDDR5 is maybe present only in early devkits.

It would be amazing if Sony could pull off interposer with great APU and 192GB/s memory on it.
 
Damn, this thread got interesting after I went to bed. The news about the Durango having customizations to hit a similar performance level to a 680

It would probably be wise not to place blind faith in some magical fairy dust the Durango GPU is supposed to have, but that no one can name. We all saw how that kind of thinking turned out for the Nintendo stalwarts in the WiiU Speculation threads.
 

thuway

Member
It would probably be wise not to place blind faith in some magical fairy dust the Durango GPU is supposed to have, but that no one can name. We all saw how that kind of thinking turned out for the Nintendo stalwarts in the WiiU Speculation threads.

The difference is- this is Microsoft. They are not afraid to foot the bill. They are an intelligent company designed around their tech. You can bet on them not fooling around when it comes to the biggest trojan horse they have.
 

Reiko

Banned
It would probably be wise not to place blind faith in some magical fairy dust the Durango GPU is supposed to have, but that no one can name. We all saw how that kind of thinking turned out for the Nintendo stalwarts in the WiiU Speculation threads.

A big mistake to make is comparing the Full HD Twins to anything that happened with the Wii U.
 

i-Lo

Member
The real question is... with those rumored specs, how would the PS4 handle something like the Luminous Engine?

It required 1.8GB of VRAM. At this point it's of course running at 1080p/60fps without optimization for final-spec consoles.

Pertaining to RAM, 4GB may sound like a lot until the OS eats into it. Even then people may think it sufficient. Unlike the processor, whose time is maximized for 100% utilization, RAM is never completely full. In fact, the base assets would take up a lot less than the 256MB of VRAM minus OS consumption (PS3). The overhead is kept so that the game can draw in more information and run smoothly (much like a scratch pad). Here's an example:

[image: Killzone 3 memory-usage breakdown (kz3opt12ki6i.jpg)]

So in effect, around three quarters of the available RAM (still a higher share than what's in the picture, but that's in MB and next gen will be in GB) would be utilized fully. As such, Agni would perhaps not fit without a reduction in framerate and/or resolution.

People downplay this, but when it comes to RAM every single megabyte counts.
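The headroom argument can be put in numbers. A minimal sketch; the OS reservation and streaming headroom are illustrative guesses, not leaked figures:

```python
def usable_game_ram_gb(total_gb, os_reserve_gb, streaming_headroom=0.25):
    """RAM a game can fill with resident assets after the OS cut,
    keeping a fraction free as streaming scratch space."""
    available = total_gb - os_reserve_gb
    return available * (1 - streaming_headroom)

# Illustrative: 4 GB total, 0.5 GB OS reservation, a quarter kept free
print(usable_game_ram_gb(4.0, 0.5))  # 2.625
```

On those guesses a game gets roughly 2.6GB of resident assets out of "4GB", which is why every megabyte of OS reservation matters.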
 

THE:MILKMAN

Member
Stacked DDR4 with 1024bit bus could reach 192 GB/s... and we all know that Sony likes stacking. :) GDDR5 is maybe present only in early devkits.

It would be amazing if Sony could pull off interposer with great APU and 192GB/s memory on it.

And wouldn't it be the case that stacked DDR4, while maybe more expensive/complex at first, would be cheaper in the long run?
 

gofreak

GAF's Bob Woodward
It would probably be wise not to place blind faith in some magical fairy dust the Durango GPU is supposed to have, but that no one can name. We all saw how that kind of thinking turned out for the Nintendo stalwarts in the WiiU Speculation threads.

When people talk about a console GPU having similar performance to another PC GPU... I think a certain amount of that is just natural console optimisation. For example, taking software or technology that runs on a 680 on a PC and getting it to run more or less the same on a console that has less raw grunt.

Now, these GPUs - and Durango's GPU - may well be tweaked so that like-for-like performance matches more powerful GPUs in some respects, aside from just 'console optimisation'... for example, high fillrate to a slice of eDRAM. But I think a lot of the talk that makes a direct comparison with a PC GPU is just about how a closed-box system keeps up with the PC systems devs were previously working with or running tech on. The idea of console hardware punching above its weight in the more ordinary sense.
 
Does the PM reinforce your post and speculation on Orbis? I sure hope so, cause your version is more cutting-edge and powerful enough to satisfy the fanbase for years. I think it's necessary in order to compete with Microsoft, which has more customization and backwards compatibility.
I am still speculating, but apparently thuway got some confirmation from an insider. The Sony VP Technology Platform specs for PS4 = the VGleaks specs, so I'm in.
 
Stacked DDR4 with 1024bit bus could reach 192 GB/s... and we all know that Sony likes stacking. :) GDDR5 is maybe present only in early devkits.

It would be amazing if Sony could pull off interposer with great APU and 192GB/s memory on it.
Likely it's a 512-bit bus (Yole assumption) and a faster 20nm DDR4. DDR3 with a 1024-bit bus can do 2000 Gbit/s = 250 GB/s. The DDR4 specs we have seen so far are slower than DDR3.
 

DieH@rd

Banned
And wouldn't it be the case that stacked DDR4, while maybe more expensive/complex at first, would be cheaper in the long run?

Of course - stacking yields will go up and RAM prices will go down with time. GDDR5, on the other hand, is slowly becoming less and less used, and large manufacturers are not confident about its price...
 
When people talk about a console GPU having similar performance to another PC GPU... I think a certain amount of that is just natural console optimisation. For example, taking software or technology that runs on a 680 on a PC and getting it to run more or less the same on a console that has less raw grunt.

Now, these GPUs - and Durango's GPU - may well be tweaked so that like-for-like performance matches more powerful GPUs in some respects, aside from just 'console optimisation'... for example, high fillrate to a slice of eDRAM. But I think a lot of the talk that makes a direct comparison with a PC GPU is just about how a closed-box system keeps up with the PC systems devs were previously working with or running tech on. The idea of console hardware punching above its weight in the more ordinary sense.

That's all well and good, but in the Durango 888 thread they're all talking about super-secret hardware customizations that act like some kind of force multiplier to the extent that relative programmable flops is no longer a valid performance measure. Based on the vaguest of vague rumors, I might add. Most assume they are things the Orbis will not have, when I think it's far more likely any actual source was probably talking about what you're saying: closed box, more efficient API, deeper access to hardware features, customized memory architectures, etc. Of course, all those things are just as true about the Orbis. I think a lot of people are having dreams of magical rasterizing co-processors and hardware ray-tracers and other such fantastical nonsense.
 

Reiko

Banned
That's all well and good, but in the Durango 888 thread they're all talking about super-secret hardware customizations that act like some kind of force multiplier to the extent that relative programmable flops is no longer a valid performance measure. Based on the vaguest of vague rumors, I might add. Most assume they are things the Orbis will not have, when I think it's far more likely any actual source was probably talking about what you're saying: closed box, more efficient API, deeper access to hardware features, customized memory architectures, etc. Of course, all those things are just as true about the Orbis. I think a lot of people are having dreams of magical rasterizing co-processors and hardware ray-tracers and other such fantastical nonsense.


Or MS trying to get more power without breaking the bank. It's not rocket science.
 

thuway

Member
That's all well and good, but in the Durango 888 thread they're all talking about super-secret hardware customizations that act like some kind of force multiplier to the extent that relative programmable flops is no longer a valid performance measure. Based on the vaguest of vague rumors, I might add. Most assume they are things the Orbis will not have, when I think it's far more likely any actual source was probably talking about what you're saying: closed box, more efficient API, deeper access to hardware features, customized memory architectures, etc. Of course, all those things are just as true about the Orbis. I think a lot of people are having dreams of magical rasterizing co-processors and hardware ray-tracers and other such fantastical nonsense.

These words are from the mouths of developers. Microsoft has the deepest pockets around.
 
And the best sorcerers? If you want to tell me they're packing a 3 TF GPU onto an enormous APU and eating the costs, I'll believe that before I believe there is some secret sauce they've discovered that no one else has thought of that miraculously makes a 768 GFlop GPU perform like a 3 TFlop one.

Or MS trying to get more power without breaking the bank. It's not rocket science.

No, it's physics.
 
From B3D, by a Jan 2013 new member, Hecatoncheires, which is the name of an AMD family of GPUs:

Hey guys,

I have a few questions concerning the rumour of an APU and GPU combination concerning the new PlayStation, if that is agreeable. Maybe someone can bring a little light into my darkness.

I stumbled upon this slide from the Fusion Developer Summit which took place in June 2012. The slide deals with GPGPU algorithms in video games. There are a couple of details that are probably somewhat interesting when speculating about a next generation gaming console.

As far as I understand, AMD argues that today GPGPU algorithms are used for visual effects only, for example physics computations of fluids or particles. That is because developers face an insurmountable bottleneck on systems where the CPU and GPU do not share memory. AMD calls it the copy overhead. This copy overhead originates from the copy work between the CPU and the GPU, which can easily take longer than the processing itself. Due to this problem, game developers only use GPGPU algorithms for visual effects that don't need to be sent back to the CPU. AMD's solution for this bottleneck is a unified address space for CPU and GPU, plus other features that have been announced for the upcoming 2013 APUs Kabini (and Kaveri).

But these features alone are only good for eliminating the copy overhead. Developers still have to deal with another bottleneck, namely the saturated GPU. This problem is critical for GPGPU in video games, since the GPU has to deal with both game code and GPGPU algorithms at once. I'm not sure whether this bottleneck only exists for thick APIs like DirectX or if it also limits an APU that is coded directly to the metal. Anyway, AMD claims that a saturated GPU makes it hard for developers to write efficient GPGPU code. To eliminate this bottleneck AMD mentions two solutions: either you can wait for a 2014 HSA feature called Graphics Pre-Emption, or you can just use an APU for the GPGPU algorithms and a dedicated GPU for graphics rendering. The latter is what AMD recommends explicitly for video gaming, and they even bring up the similarities to the PlayStation 3, which famously uses SIMD co-processors for all kinds of tasks.


I would like to know what you guys think about these slides.


What if AMD was building a 28nm APU for Sony that is focused solely on GPGPU - for example, four big Steamroller cores with very fast threads in conjunction with a couple of MIMD engines? Combine it with a dedicated GPU and a high-bandwidth memory solution and you have a pretty decent next-gen console.

I would also like to know if an APU + GPU + RAM system in package is possible with 2.5D stacking, which was forecasted by Yole Development for the Sony PlayStation 4, for IBM Power8 and Intel Haswell.

And since Microsoft is rumoured to have a heavily customized chip with a "special sauce", could that mean they paid AMD to integrate the 2014 feature Graphics Pre-Emption into the Xbox processor, so they can go with one single ultra-low-latency chip instead of a FLOP-heavy system-in-package?

In any case it gives us information to support APU + GPU and Kabini or Kaveri as base designs - Kabini because of Sweetvar26 (Jaguar) and low-power XTV/streaming media.
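AMD's "copy overhead" argument in the quoted post can be modelled with back-of-envelope arithmetic. A sketch; the link bandwidth, data size and kernel time are illustrative assumptions, not figures from the slides:

```python
def gpgpu_dispatch_ms(data_gb, kernel_ms, link_gb_s=8.0, unified=False):
    """Time for one GPGPU job whose result must return to the CPU.
    With a unified address space (APU/HSA) both copies disappear."""
    if unified:
        return kernel_ms
    copy_ms = data_gb / link_gb_s * 1000.0   # one direction over the link
    return copy_ms + kernel_ms + copy_ms     # upload + kernel + readback

# 256 MB of physics data, a 2 ms kernel, an 8 GB/s link:
print(gpgpu_dispatch_ms(0.25, 2.0))                # 64.5 -> copies dominate
print(gpgpu_dispatch_ms(0.25, 2.0, unified=True))  # 2.0
```

This is why the slide limits discrete-GPU GPGPU to effects whose results never come back to the CPU: the readback, not the maths, is the cost.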
 

Reiko

Banned
And the best sorcerers? If you want to tell me they're packing a 3 TF GPU onto an enormous APU and eating the costs, I'll believe that before I believe there is some secret sauce they've discovered that no one else has thought of that miraculously makes a 768 GFlop GPU perform like a 3 TFlop one.



No, it's physics.

Custom silicon is new to you?
 

mrklaw

MrArseFace
Custom silicon is new to you?

I guess his point is that - unless the custom silicon is a 2.2TF GPU - you don't just get a leap like that from a couple of coprocessors.


I'm similarly skeptical but also curious what they might be. Just as we think we have some of the rumours locked down in reality, here we are with another vague spec to try and decode.
 

gofreak

GAF's Bob Woodward
That's all well and good, but in the Durango 888 thread they're all talking about super-secret hardware customizations that act like some kind of force multiplier to the extent that relative programmable flops is no longer a valid performance measure. Based on the vaguest of vague rumors, I might add. Most assume they are things the Orbis will not have, when I think it's far more likely any actual source was probably talking about what you're saying: closed box, more efficient API, deeper access to hardware features, customized memory architectures, etc. Of course, all those things are just as true about the Orbis. I think a lot of people are having dreams of magical rasterizing co-processors and hardware ray-tracers and other such fantastical nonsense.

I'm not au fait with these rumours or speculation, I'm scarcely keeping up with the stuff in this thread it seems.

But if people are talking about something more than the usual 'closed box' performance multiplier, I would guess it has to do with eDRAM. Beyond that...I'd invite anyone hinting at that to shit or get off the pot :p It's pointless to try and debate not knowing what's being talked about. If there is anything to talk about we'll know about it in a few months I guess.
 

Reiko

Banned
I guess his point is that - unless the custom silicon is a 2.2TF GPU - you don't just get a leap like that from a couple of coprocessors.


I'm similarly skeptical but also curious what they might be. Just as we think we have some of the rumours locked down in reality, here we are with another vague spec to try and decode.

I take it as we're missing a significant piece of Durango specs. And maybe more so for Orbis.

Put it this way. Any news/specs that are spot on will be removed on NeoGAF. Just like it has happened before.
 
Custom silicon is new to you?

It's all custom silicon if you can't buy it at Fry's. AMD already has one of the most efficient high performance GPU core technologies available. No one at MS came along and said "hey, if you added this tiny unit it would make your whole GPU 3 times faster". All MS is doing is deciding how much money and time they can spend and how much heat they can get rid of. AMD will make the GPU as big as they want, and they'll give it whatever memory you want, but there is no secret thing they can "customize" to magically make it work 200% better.
 

Reiko

Banned
It's all custom silicon if you can't buy it at Fry's. AMD already has one of the most efficient high performance GPU core technologies available. No one at MS came along and said "hey, if you added this tiny unit it would make your whole GPU 3 times faster". All MS is doing is deciding how much money and time they can spend and how much heat they can get rid of. AMD will make the GPU as big as they want, and they'll give it whatever memory you want, but there is no secret thing they can "customize" to magically make it work 200% better.

Also refer to my previous post.
 

Ales

Neo Member
And the best sorcerers? If you want to tell me they're packing a 3 TF GPU onto an enormous APU and eating the costs, I'll believe that before I believe there is some secret sauce they've discovered that no one else has thought of that miraculously makes a 768 GFlop GPU perform like a 3 TFlop one.



No, it's physics.

I agree with you here; some people believe in fairy tales. A GPU can have all the customization in the world, but you cannot turn a 1.2 TFLOPS GPU into a monster.
 

THE:MILKMAN

Member
That's all well and good, but in the Durango 888 thread they're all talking about super-secret hardware customizations that act like some kind of force multiplier to the extent that relative programmable flops is no longer a valid performance measure. Based on the vaguest of vague rumors, I might add. Most assume they are things the Orbis will not have, when I think it's far more likely any actual source was probably talking about what you're saying: closed box, more efficient API, deeper access to hardware features, customized memory architectures, etc. Of course, all those things are just as true about the Orbis. I think a lot of people are having dreams of magical rasterizing co-processors and hardware ray-tracers and other such fantastical nonsense.

Here is an exchange from bkilian over at B3D, who is a confirmed (ex-)MS insider.

Rangers said:
OK, about the three blocks of custom external hardware to assist in gaming tasks, I've realized/found out these are in Durango at large, not necessarily the GPU. They could be CPU related or anything I guess.

Only it's known that one of the blocks is in the GPU.

So I dont know, maybe an audio DSP for one? A section of vector units to assist the CPU's? The GPU block could just be the ESRAM/related goodies?

bkilian said:
I can imagine what three blocks they're talking about, and at least two of them will help with graphics, although one is more general purpose. I have a special fondness for the third, since I spent many hours coming to terms with its idiosyncrasies.

I think with the above it's almost confirmation that Durango has multiple customisations. Orbis could too, we just don't have a talkative Sony insider afaik.
 