
Japanese Kutaragi Interview on PS3, Nvidia, eDRAM, etc.

HyperionX

Member
Shogmaster said:
I can understand certain claims are a bit much to digest at once, but the overall sense I get is that ATI has everything to prove and nVidia has nothing to prove. Tell me if I'm off on that.

Anyways, seeing how Xenos has 330M trannies (250M for the main shader unit + 20M for the AA, stencil, and Z-sort ROP on the daughter die, and 80M for 10MB of eDRAM minus the ROP on the daughter die), and RSX has 300M (260M for the next-gen part and 40M for GS perhaps?), it's hard to swallow the "RSX has 50% more rendering power than Xenos" BS Sony and nVidia are throwing around, especially looking at the efficiency built into Xenos.

Xenos doesn't have 330M transistors from what I've heard. In fact the GPU unit itself is only 150M not including the eDRAM + AA unit.
 
midnightguy

Shogmaster said:

Anyways, seeing how Xenos has 330M trannies (250M for the main shader unit + 20M for the AA, stencil, and Z-sort ROP on the daughter die, and 80M for 10MB of eDRAM minus the ROP on the daughter die), and RSX has 300M (260M for the next-gen part and 40M for GS perhaps?), it's hard to swallow the "RSX has 50% more rendering power than Xenos" BS Sony and nVidia are throwing around, especially looking at the efficiency built into Xenos.



Whoa, hang on there. I haven't heard this before.


official info is: Xenos has 232 million transistors for the GPU core, and 100 million for the eDRAM unit which contains the eDRAM plus some logic, including 192 little "processors" for AA.


RSX has 300 million or slightly over 300 million transistors, no eDRAM, and no mention of an on-board Graphics Synthesizer.
 

gofreak

GAF's Bob Woodward
HyperionX said:
Xenos doesn't have 330M transistors from what I've heard. In fact the GPU unit itself is only 150M not including the eDRAM + AA unit.

Apparently it's 232m for Xenos itself and 100m for the eDram unit. 20m of the eDram unit's transistors are likely computational logic for AA etc (10MB of eDram likely takes 80m transistors), so non-eDram logic total would be ~250m.
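(For anyone wondering where the 80m figure comes from, here's a quick back-of-the-envelope check, assuming a standard 1T1C DRAM cell with one transistor per bit and ignoring the extra transistors for sense amps, decoders and redundancy; purely illustrative:)

```python
# Rough sanity check: transistors implied by 10MB of eDRAM, assuming a
# 1T1C DRAM cell (one transistor per bit).  Support circuitry (sense amps,
# decoders, redundancy) adds more on top, so treat this as a floor.
edram_bytes = 10 * 1024 * 1024       # 10 MB
bits = edram_bytes * 8               # 83,886,080 bits
transistors = bits                   # one transistor per bit
print(f"~{transistors / 1e6:.0f}M transistors for the cell array")  # ~84M
```

Which lands right around the ~80m estimate above.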
 

dorio

Banned
Drek said:
Hmm, I'm personally expecting more of the reverse, the CPU being used to supplement the GPU, at least from the PS3. With the 7 SPEs and the fast bi-directional bandwidth Sony can effectively cheat to get noticeably better visuals than the X360. They'll have to sacrifice general computing and non-graphics in-game operations like physics and AI, but they'll still be miles ahead of last generation. Smart move by Sony if that is their plan, since the average consumer equates graphics with overall system power, and is also much less capable of noticing differences in physics, AI, and other general computing functions.
It depends on your definition of "better visuals". The xgpu can put out a 4X multisample AA hidef image with little hit on performance. It remains to be seen if the RSX can do the same.
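(A rough back-of-the-envelope look at what that AA actually has to store, assuming 32-bit colour and 32-bit depth/stencil per sample; the numbers are illustrative only, and the widely reported answer to the buffer not fitting in 10MB is that Xenos renders in tiles:)

```python
# Framebuffer footprint for 720p with 4x multisample AA, assuming 32-bit
# colour and 32-bit depth/stencil per sample (illustrative figures only).
width, height = 1280, 720
samples = 4
bytes_per_sample = 4 + 4                    # colour + depth/stencil

framebuffer = width * height * samples * bytes_per_sample
print(f"4xMSAA 720p buffer: {framebuffer / 2**20:.1f} MB")          # ~28.1 MB

edram = 10 * 2**20                          # the 10MB on the daughter die
print(f"fits in eDRAM in one go? {framebuffer <= edram}")           # False
```

The point of the eDram is that all that multisample traffic stays on the daughter die instead of hitting the shared GDDR3 bus.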
 

HyperionX

Member
gofreak said:
Apparently it's 232m for Xenos itself and 100m for the eDram unit. 20m of the eDram unit's transistors are likely computational logic for AA etc (10MB of eDram likely takes 80m transistors), so non-eDram logic total would be ~250m.

Ok, it seems I had out of date info.

Anyways, back to Shogmaster. nVidia has maintained that unified shaders use too many transistors or whatever, so it is definitely conceivable that there could be a significant performance advantage even if the transistor counts are similar. Also, newer != better, so it remains to be seen whether all the "efficiencies" add up to anything.
 

gofreak

GAF's Bob Woodward
dorio said:
It depends on your definition of "better visuals". The xgpu can put out a 4X multisample AA hidef image with little hit on performance.

It depends what you mean by a hit on performance. That AA isn't free.. it may not hit the performance attainable by Xenos much, but that performance might have been higher if the transistors spent on eDram had been spent elsewhere (i.e. on shaders). So yeah, little hit on the performance the chip is giving you, but a hit relative to the chip that might have been if those trannies had been spent on shading power? Yes.
 

aaaaa0

Member
gofreak said:
It's natural that a more critical eye will be cast on ATi's offering, since they're the ones making bolder claims, architecturally anyway.

I would say the same about CELL. But there is a distinct lack of scepticism from certain parties at various message boards (not to point any fingers). :)
 

gofreak

GAF's Bob Woodward
aaaaa0 said:
I would say the same about CELL. But there is a distinct lack of scepticism from certain parties at various message boards (not to point any fingers). :)

Cell has come under massive analysis etc. for years now. And there's been plenty of nay-saying, alongside the yay-saying. Plenty of questions asked, it came under very close and critical scrutiny. Things have quietened down since debate has gone as far as it can, really, from what we have on it, but some fresh points of argument may re-emerge when IBM opens up the hardware to everyone sometime in the summer. Although we do really have an awful lot of info on it as is, so..

All that said, Xenos has its "groupies" too. How critically or sceptically you think something is being received may be a matter of perception..reminds me of that thread bemoaning GAF's "sony bias". If you're sensitive to something, you'll pick up on it in an unbalanced fashion, and it'll seem like it's everywhere.
 

Wunderchu

Member
gofreak said:
Cell has come under massive analysis etc. for years now. And there's been plenty of nay-saying, alongside the yay-saying. Plenty of questions asked, it came under very close and critical scrutiny. Things have quietened down since debate has gone as far as it can, really, from what we have on it, but some fresh points of argument may re-emerge when IBM opens up the hardware to everyone sometime in the summer. Although we do really have an awful lot of info on it as is, so..

All that said, Xenos has its "groupies" too. How critically or sceptically you think something is being received may be a matter of perception..reminds me of that thread bemoaning GAF's "sony bias". If you're sensitive to something, you'll pick up on it in an unbalanced fashion, and it'll seem like it's everywhere.
I agree
 
Shogmaster

midnightguy said:
Whoa, hang on there. I haven't heard this before.


official info is: Xenos has 232 million transistors for the GPU core, and 100 million for the eDRAM unit which contains the eDRAM plus some logic, including 192 little "processors" for AA.

Yeah. I worded that poorly. My 250ish M = 230ish M for the main Xenos core + 20M for the AA, stencil, and Z-check ROP on the daughter die. And the remaining 80M is for the 10MB eDRAM. All told, 330ish M, or your 332M.

RSX has 300 million or slightly over 300 million transistors, no eDRAM, and no mention of an on-board Graphics Synthesizer.

Well, it's my crazy speculation, but where else would it go? It's gotta go somewhere. And 40ish M trannies is a little too big to shove into anything else.....
 

gofreak

GAF's Bob Woodward
Shogmaster said:
Well, it's my crazy speculation, but where else would it go? It's gotta go somewhere. And 40ish M trannies is a little too big to shove into anything else.....

I don't think we can really say that when we've no idea what NVidia's next-gen shaders look like, for example, and how many trannies they eat. Or how much 128-bit HDR etc. would cost in terms of trannies. It's perfectly plausible that RSX is spending more logic; it's not bound to track the amounts spent in Xenos's shader core. It's certainly very possible to spend that amount on the GPU without explaining it by the presence of eDram or a GS.
 

dorio

Banned
gofreak said:
It depends what you mean by a hit on performance. That AA isn't free.. it may not hit the performance attainable by Xenos much, but that performance might have been higher if the transistors spent on eDram had been spent elsewhere (i.e. on shaders). So yeah, little hit on the performance the chip is giving you, but a hit relative to the chip that might have been if those trannies had been spent on shading power? Yes.
Shader trannies were not traded for edram trannies. They don't even sit on the same card. It's not like ATI would have doubled the pipes if they excluded the edram.
I don't think we can really say that when we've no idea what NVidia's next-gen shaders look like, for example. Or how much 128-bit HDR etc. would cost in terms of trannies. It's perfectly plausible that RSX is spending more logic; it's not bound to track the amounts spent in Xenos's shader core.
Will the g70 have this 128-bit HDR rendering?
 

Pimpwerx

Member
RSX has like 30% more trannies dedicated to shaders, and a 10% faster clock speed. If NVidia got it running fp16 or even fp32 HDR at usable resolutions, then I can't see why it won't come out ahead in the GPU race. The AA may end up stuck at 2xMSAA to be free, but that's not a huge deal, I don't think. I don't believe the swing between 2x and 4x would be significant enough to make up for potentially big effects advantages. I say this again based on the fact that Sony seems to have placed emphasis on HDR and SSS in their demos. I think a recurring theme was that they had devs shoot for the moon on lighting. The VS power of Cell combined with possibly deep shader pipes on RSX could make the PS3 graphics advantage significant. But I'm also a bit of a dreamer. ;) PEACE.
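(Purely for illustration, multiplying the two ratios quoted above is where a "~50% more" kind of number comes from; it's a naive paper-spec product that ignores architecture and efficiency differences entirely, and the 30% shader-transistor input is itself speculation:)

```python
# Naive "paper spec" product of the two ratios quoted above.  The shader
# transistor ratio is forum speculation, not a confirmed figure.
shader_transistor_ratio = 1.30   # "like 30% more trannies dedicated to shaders"
clock_ratio = 550 / 500          # 550MHz vs 500MHz, the announced clocks
print(f"combined factor: {shader_transistor_ratio * clock_ratio:.2f}x")  # ~1.43x
```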
 

gofreak

GAF's Bob Woodward
dorio said:
Shader trannies were not traded for edram trannies. They don't even sit on the same card. It's not like ATI would have doubled the pipes if they excluded the edram.

If you want to boil it all down to the very basics (money), if MS wasn't spending dollars on a separate eDram module, they could be spending it on more silicon for the main GPU, Xenos. There certainly is a tradeoff there.

dorio said:
Will the g70 have this 128-bit HDR rendering?

It would seem so, yes.
 

dorio

Banned
Pimpwerx said:
RSX has like 30% more trannies dedicated to shaders, and a 10% faster clock speed. If NVidia got it running fp16 or even fp32 HDR at usable resolutions, then I can't see why it won't come out ahead in the GPU race. The AA may end up stuck at 2xMSAA to be free, but that's not a huge deal, I don't think. I don't believe the swing between 2x and 4x would be significant enough to make up for potentially big effects advantages. I say this again based on the fact that Sony seems to have placed emphasis on HDR and SSS in their demos. I think a recurring theme was that they had devs shoot for the moon on lighting. The VS power of Cell combined with possibly deep shader pipes on RSX could make the PS3 graphics advantage significant. But I'm also a bit of a dreamer. ;) PEACE.
That's all subjective. Personally I think AA quality is the biggest difference between realtime graphics and prerendered, i.e. Toy Story, Shrek, etc.
 

Kleegamefan

K. LEE GAIDEN
Another question...

Did nVidia say for sure RSX would have vertex processing units??

I know there was a lot of talk at first of the nVidia GPU having nothing but Pixel Shader units and ROPs, with CELL feeding it vertices... has that been debunked? (just curious)

If, however, RSX has, say, 8 vertex units, they could also receive vertex assist from SPEs too, no??

Just wanting to know if I am on the right page here....

dorio said:
That's all subjective. Personally I think AA quality is the biggest difference between realtime graphics and prerendered, i.e. Toy Story, Shrek, etc.

What leads you to believe RSX will not have good AA quality??
 

HyperionX

Member
Kleegamefan said:
Another question...

Did nVidia say for sure RSX would have vertex processing units??

I know there was a lot of talk at first of the nVidia GPU having nothing but Pixel Shader units and ROPs, with CELL feeding it vertices... has that been debunked? (just curious)

If, however, RSX has, say, 8 vertex units, they could also receive vertex assist from SPEs too, no??

Just wanting to know if I am on the right page here....

I believe at Sony's E3 press conference it was announced that the RSX will have vertex shaders.
 
Kleegamefan said:
Another question...

Did nVidia say for sure RSX would have vertex processing units??

I know there was a lot of talk at first of the nVidia GPU having nothing but Pixel Shader units and ROPs, with CELL feeding it vertices... has that been debunked? (just curious)

I remember reading somewhere that nVidia has acknowledged there being vertex shaders on the RSX. Don't quote me on it though.

If, however, RSX has, say, 8 vertex units, they could also receive vertex assist from SPEs too, no??

Just wanting to know if I am on the right page here....

I think that's precisely how it will work.

What leads you to believe RSX will not have good AA quality??

No one said it will have poor AA quality. It's just that doing AA on the RSX will eat up quite a bit of its bandwidth (like all other "normal" GPUs), especially at 1080p.
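(To put rough numbers on that, here's a naive estimate of 4xMSAA framebuffer traffic at 1080p on a conventional GPU, assuming 32-bit colour and 32-bit Z per sample, one colour write plus one Z read and write per sample, overdraw of 1, no colour/Z compression, and the 22.4GB/s GDDR3 figure commonly cited for RSX; real chips compress this traffic heavily, so it's only meant to show why MSAA leans on bandwidth:)

```python
# Naive 4xMSAA framebuffer-traffic estimate at 1080p, no eDRAM, no
# colour/Z compression, overdraw of 1.  Illustrative only.
width, height, samples, fps = 1920, 1080, 4, 60
bytes_per_sample = 4 + 4 + 4               # colour write + Z read + Z write

per_frame = width * height * samples * bytes_per_sample
per_second = per_frame * fps

vram_bw = 22.4e9                           # GB/s figure commonly cited for RSX's GDDR3
print(f"framebuffer traffic: {per_second / 1e9:.1f} GB/s "
      f"(~{per_second / vram_bw:.0%} of a 22.4 GB/s bus, before textures and overdraw)")
```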
 

Elios83

Member
Kleegamefan said:
Another question...

Did nVidia say for sure RSX would have vertex processing units??

I know there was a lot of talk at first of the nVidia GPU having nothing but Pixel Shader units and ROPs, with CELL feeding it vertices... has that been debunked? (just curious)

If, however, RSX has, say, 8 vertex units, they could also receive vertex assist from SPEs too, no??

Just wanting to know if I am on the right page here....



What leads you to believe RSX will not have good AA quality??

RSX has both pixel and vertex shaders. SPEs can act like vertex shaders too, but of course the GPU puts the limit on the number of vertices it can rasterize.
 

gofreak

GAF's Bob Woodward
Kleegamefan said:
Another question...

Did nVidia say for sure RSX would have vertex processing units??

They didn't elaborate on the configuration in RSX at all, IIRC, but it's very safe to say it has vertex units, given that stuff like UE3 was running on the system without any help from the SPEs, and they're talking about taking shading code from the PC and running it on PS3 without any modification etc.

Kleegamefan said:
If, however, RSX has, say, 8 vertex units, they could also receive vertex assist from SPEs too, no??

Yes, they can.

For example, from Watch Impress's David Kirk interview:

David Kirk: SPE and RSX can work together. SPE can preprocess graphics data in the main memory or postprocess rendering results sent from RSX.

Nishikawa's speculation: for example, when you have to create a lake scene by multi-pass rendering with multiple render targets, the SPEs can render a reflection map while RSX does other things. Since a reflection map requires less precision, it's not much of an overhead even though you have to load the related data in both the main RAM and VRAM. It works like SLI between the SPEs and RSX.

David Kirk: Post-effects such as motion blur, simulation for depth of field, bloom effect in HDR rendering, can be done by SPE processing RSX-rendered results.

Nishikawa's speculation: RSX renders a scene in the main RAM then SPEs add effects to frames in it. Or, you can synthesize SPE-created frames with an RSX-rendered frame.

David Kirk: Let SPEs do vertex-processing then let RSX render it.

Nishikawa's speculation: You can implement a collision-aware tessellator and dynamic LOD on the SPEs.


David Kirk: SPE and GPU work together, which allows physics simulation to interact with graphics.

Nishikawa's speculation: For expression of water wavelets, a normal map can be generated by pulse physics simulation with a height map texture. This job is done on the SPEs and RSX in parallel.
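(Purely as a conceptual sketch of the post-effect split Kirk describes, here's a bloom-style pass written against an already-rendered frame; numpy on a PC stands in for SPE code, the function name is made up for illustration, and none of this reflects actual PS3 APIs or SPE local-store management:)

```python
import numpy as np

def bloom_pass(frame: np.ndarray, threshold: float = 0.8, radius: int = 4) -> np.ndarray:
    """Stand-in for an SPE post-effect: bright-pass, blur, then add back.

    `frame` is an HxWx3 float array in [0, 1], as if read back from the
    GPU's render target.  Real SPE code would stream local-store-sized
    tiles, not touch a whole frame at once.
    """
    # Bright-pass: keep only the pixels above the threshold.
    bright = np.where(frame > threshold, frame, 0.0)

    # Cheap separable box blur as a placeholder for a proper Gaussian.
    blurred = bright.copy()
    for axis in (0, 1):
        acc = np.zeros_like(blurred)
        for offset in range(-radius, radius + 1):
            acc += np.roll(blurred, offset, axis=axis)
        blurred = acc / (2 * radius + 1)

    # Composite the glow back onto the original frame.
    return np.clip(frame + blurred, 0.0, 1.0)

if __name__ == "__main__":
    fake_render_target = np.random.rand(720, 1280, 3).astype(np.float32)
    post_processed = bloom_pass(fake_render_target)
    print(post_processed.shape, post_processed.dtype)
```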
 

dorio

Banned
gofreak said:
If you want to boil it all down to the very basics (money), if MS wasn't spending dollars on a separate eDram module, they could be spending it on more silicon for the main GPU, Xenos. There certainly is a tradeoff there. It would seem so, yes.
I don't think it's that simple. It's hard to tell since the Xenos architecture is so different from current cards, but if their claims of 32-pipe equivalence are correct then that performance is in line with what cards coming out in that time frame would have. Their goal, though, is to achieve the actual peak performance that these PR guys are throwing out there by moving bandwidth-consuming functions to another processor and memory store. Now if Xenos had 16 pipes then I'd say yes, you're probably right that they traded shading ability to include the edram.
 

dorio

Banned
Kleegamefan said:
What leads you to believe RSX will not have good AA quality??
My point was that if the RSX doesn't have edram then that good AA quality will come with a big performance hit, if it's like current-gen cards.
 

Fafalada

Fafracer forever
ShogMaster said:
Well, it's my crazy speculation, but where else would it go?
It's a bit more than that - it's going into the silly realm. People keep talking about how little modification NVidia had time to do, and here you'd go and stuff a whole extra chip, complete with its external interfaces, onto RSX.

GS on 90nm is just over ~40mm2; it's a pretty tiny chip compared to the rest of the stuff in there. Though personally I don't like the idea of using GS in there - it's a chip that has no use at all in the system other than playing PS2 games.
 

gofreak

GAF's Bob Woodward
dorio said:
I don't think it's that simple. It's hard to tell since the Xenos architecture is so different from current cards, but if their claims of 32-pipe equivalence are correct then that performance is in line with what cards coming out in that time frame would have. Their goal, though, is to achieve the actual peak performance that these PR guys are throwing out there by moving bandwidth-consuming functions to another processor and memory store. Now if Xenos had 16 pipes then I'd say yes, you're probably right that they traded shading ability to include the edram.

According to Dave Baumann, the guy who made the "32-pipe" comparison was off base. Such a comparison really can't be made without benchmarks on final hardware.

It should be really quite clear that if they weren't spending money on eDram it could have been spent elsewhere. When you look at the transistor budget of the Xenos total and compare it to PC chips coming out around the same time, or RSX even, that supports the notion of a tradeoff. The total transistor budgets are similar, but MS chose to spend theirs differently.

dorio said:
My point was that if the RSX doesn't have edram then that good AA quality will come with a big performance hit, if it's like current-gen cards.

The hit really depends on what you're doing. But yes, relative to its own performance without 4xAA, of course there will be one, but relative to other chips..that's a trickier comparison..
 
Shogmaster

Fafalada said:
It's a bit more than that - it's going into the silly realm. People keep talking about how little modification NVidia had time to do, and here you'd go and stuff a whole extra chip, complete with its external interfaces, onto RSX.

GS on 90nm is just over ~40mm2; it's a pretty tiny chip compared to the rest of the stuff in there. Though personally I don't like the idea of using GS in there - it's a chip that has no use at all in the system other than playing PS2 games.


Crazy, silly, it's just my cup of tea. ;)

But seriously, in order to deliver the kind of BC Sony is promising, you have to put the GS in the PS3 somewhere. The only logical place IMO would be in the RSX, where you can reuse the same data pathways that RSX would be using for graphics.



gofreak said:
It should be really quite clear that if they weren't spending money on eDram it could have been spent elsewhere. When you look at the transistor budget of the Xenos total and compare it to PC chips coming out around the same time, or RSX even, that supports the notion of a tradeoff. The total transistor budgets are similar, but MS chose to spend theirs differently.

Why limit things to only looking at transistor budget? Surely, the money Sony is putting into BR is money not going into more gaming related features/powers?
 

Lord Error

Insane For Sony
dorio said:
That's all subjective. Personally I think AA quality is the biggest difference between realtime graphics and prerendered, i.e. Toy Story, Shrek, etc.
You can have an impressive level of AA on PC games right now, and it doesn't help them look anywhere near as good as quality CGI looks. As for the specs, 4x AA on R500 is basically free (2-5% hit), but that's not really even close to the level of CGI AA. Also, I think current Nvidia cards have 2xAA for free, and 4x with some hit, but I have no idea how much of a hit.

dorio said:
Will the g70 have this 128-bit HDR rendering?
I'm pretty sure the leaked specs said it will have 64bit HDR and blending, but those are just leaked specs, so who knows.
 

dorio

Banned
gofreak said:
According to Dave Baumann, the guy who made the "32-pipe" comparison was off base. Such a comparison really can't be made without benchmarks on final hardware.

It should be really quite clear that if they weren't spending money on eDram it could have been spent elsewhere. When you look at the transistor budget of the Xenos total and compare it to PC chips coming out around the same time, or RSX even, that supports the notion of a tradeoff. The total transistor budgets are similar, but MS chose to spend theirs differently.



The hit really depends on what you're doing. But yes, relative to its own performance without 4xAA, of course there will be one, but relative to other chips..that's a trickier comparison..
Thanks, hadn't read that from Dave. I think ~250M trannies is in line with the transistor counts of the ATI cards being released this year. Do you know the trannie count for the 520?

Lord Error said:
You can have an impressive level of AA on PC games right now, and it doesn't help them look anywhere near as good as quality CGI looks. As for the specs, 4x AA on R500 is basically free (2-5% hit), but that's not really even close to the level of CGI AA. Also, I think current Nvidia cards have 2xAA for free, and 4x with some hit, but I have no idea how much of a hit.
Yeah, I know CGI has an insane number of samples, but my point is the more it has, the closer it looks to CGI imo.
 

Fafalada

Fafracer forever
Shogmaster said:
The only logical place IMO would be in the RSX, where you can reuse the same data pathways that RSX would be using for graphics.
That would involve redesigning GS for a new memory interface - in other words, you're no longer getting a free compatibility lunch. It makes a lot less sense to me - the place where it's far more likely to be is over that 2.5GB/s north bridge connection (which is more than double the bandwidth GS needs anyhow).

But if you're going through the trouble of redesigning your chips for backwards compatibility like you suggest, why not just modify RSX for compatibility with GS addressing modes instead?
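(A trivial check of the "more than double" claim, using the commonly cited ~1.2GB/s peak for the PS2's EE-to-GS path; both figures are reported numbers rather than anything confirmed here:)

```python
# Quick check of the headroom if GS traffic went over the north bridge.
# Both figures are commonly cited/reported numbers, not confirmed specs.
ps2_ee_to_gs = 1.2        # GB/s, peak of the PS2's EE->GS (GIF) path
ps3_northbridge = 2.5     # GB/s, the link quoted above
print(f"headroom: {ps3_northbridge / ps2_ee_to_gs:.2f}x")   # ~2.08x
```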
 

gofreak

GAF's Bob Woodward
dorio said:
Thanks, hadn't read that from Dave. I think ~250M trannies is in line with the transistor counts of the ATI cards being released this year. Do you know the trannie count for the 520?

It's reportedly north of 300m.

Shogmaster said:
Why limit things to only looking at transistor budget? Surely, the money Sony is putting into BR is money not going into more gaming related features/powers?

Of course, but I think their GPU budget is very similar to MS's, so it doesn't really matter when comparing to other systems, in terms of graphics systems.
 
Shogmaster

Fafalada said:
That would involve redesigning GS for a new memory interface - in other words, you're no longer getting a free compatibility lunch. It makes a lot less sense to me - the place where it's far more likely to be is over that 2.5GB/s north bridge connection (which is more than double the bandwidth GS needs anyhow).

Hmm... Northbridge.... Yeah.... OK.....

But if you're going through the trouble of redesigning your chips for backwards compatibility like you suggest, why not just modify RSX for compatibility with GS addressing modes instead?

That sounds a lot more complex/time-consuming, with less sure results. Am I way off?
 

dorio

Banned
It would be a shame if they had to devote 40 million transistors to backwards compatibility and that's all it's able to do. Have they confirmed that BC would be a hardware solution?
 

gofreak

GAF's Bob Woodward
dorio said:
It would be a shame if they had to devote 40 million transistors to backwards compatibility and that's all it's able to do. Have they confirmed that BC would be a hardware solution?

I'm not sure how likely it is that 40m of RSX's own transistors are being used for GS, as discussed above by Faf and Shog. We'll know for sure soon enough. BC appears to be a mix of software and hardware - the CPU is emulated on Cell, while it appears GS is in there as hardware.
 

TheDuce22

Banned
It seems to me like common sense would dictate that one will not be noticeably more powerful than the other. Six months is not a lot of time, and Nvidia and ATI are both very capable.
 

ourumov

Member
Fafalada said:
That would involve redesigning GS for new memory interface - in other words you're no longer getting free compatibility lunch. It makes a lot less sense to me - the place where it's far more likely to be is over that 2.5GB/s north bridge connection (which is more then double the bandwith GS needs anyhow).

But if you're going through the trouble of redesigning your chips for backwards compatibility like you suggest, why not just modify RSX for compatibility with GS addressing modes instead.

Looking back at PSX emulation on PS2, we had the IOP processing games and then sending primitives to the SIF, which sent them to the EE core, which transformed them into GS commands.
But then... the GS was fast enough to match whatever speed the PSX's GPU achieved. Now there is a difference in bandwidth that I don't see as so easy to solve...


The EE can easily be emulated on the SPEs... no doubt about it... You can also transform your results into RSX commands there... and you could even improve rendering passes in order to need less memory bandwidth, although I am not sure of the feasibility of it...
 

Elios83

Member
http://ps3.ign.com/articles/624/624605p1.html

"The seven SPEs (Synergistic Processor Element) of Cell can be used for graphics," reveals Kutaragi. "In fact, many of the E3 demos were made without a graphics chip, with only Cell used for all graphics. However, this means of use is wasteful."

Kutaragi reveals that there was once the idea of using two Cell chips in the PS3, with one used as the CPU and the other used for graphics. However, this idea was killed when it was realized that Cell isn't appropriate for the functionality required for shaders.


Referring to the means of backwards compatibility used in the PS3, Kutaragi reveals, "We use a combination of hardware and software."

"With the Xbox next generation coming in November of this year, the current Xbox will become last generation. With that, the Xbox will kill itself. The only way to save it is to have 100% backwards compatibility from the first day. However, it seems that [Microsoft] cannot make that commitment -- on a technology level, it's difficult."
 

dorio

Banned
Kleegamefan said:
he hints that some hardware solutions were required in order to max out compatibility due to the fact that some PS2 games do things with the hardware that are not theoretically possible.
I wonder what that means.
 

Fafalada

Fafracer forever
ourumov said:
But then... the GS was fast enough to match whatever speed the PSX's GPU achieved. Now there is a difference in bandwidth that I don't see as so easy to solve...
Well, to be fair, during normal rendering GS memory access patterns tend to be cache friendly (all the more as games got more optimized), and I have no doubt whatsoever that RSX has more than enough bandwidth to sustain such rendering and more.
The tricky part is when we get to the more exotic render-to-texture shenanigans and stuff, where things can be (and have been on PS2) done that can potentially bring any architecture without unified eDram to its knees (including those that use eDram for framebuffers).
There's also the issue of emulating GS addressing patterns (behaviour of which pretty much any half-decent PS2 game (ab)uses to various extents), but that 'can' be done through shader math.

I would like to see RSX have a go at this, if for no other reason than to see how they tackled the various challenges mentioned ;) but I'm not gonna hold my breath for it.
 

Kleegamefan

K. LEE GAIDEN
rsxarchitecture.jpg


Does anyone have a clearer picture of this RSX slide... perhaps it could give us some more insight into the chip??
 

Vince

Banned
Fafalada said:
The tricky part is when we get to the more exotic render-to-texture shenanigans and stuff

Hey Farva, what's the name of that restaurant you like with all the goofy stuff on the wall and the mozzarella sticks? Sorry, it had to be done.

PS. Klee, that's just a conceptual diagram, it's not reflective of an architectural implementation AFAIK.
 

TTP

Have a fun! Enjoy!
Kleegamefan said:
rsxarchitecture.jpg


Does anyone have a clearer picture of this RSX slide... perhaps it could give us some more insight into the chip??

I don't think there is anything tremendously revealing in there, but here you go ;)

 

Wunderchu

Member
Pimpwerx said:
A lot has been made of Xenos' abilities as a GPGPU. Looks like RSX was designed along the same lines too? This is based on the machine translation. I hope someone can get us a proper translation today. This has been the most interesting part of the interview IMO. PEACE.
this part 3 is the most interesting part of this interview to me, as well
 