
AMD to sell a cut down version of Sony's PS4 APU

Durante

Member
PS4 is a fucking beast, teraflops don't mean shit when it comes to coding for a closed, windows OS-free system.
Actually, teraflops mean exactly as much on a closed, "windows OS-free" (as if that made any significant difference) system as they do on an open one. Their meaning is pretty clearly defined.

Yes, and it should be obvious when you look at what is possible on PS3. (And the fact that mega powerful PCs still have crappy frame rates on the latest games, often.)
This is 100% grade A bullshit. Every PS360 port runs much better at the same settings on a moderately competent gaming PC than it does on the consoles.

So are you saying HSA is dated in comparison with PCIe?
Why do I get the feeling that you only have the vaguest of ideas of what the terms you are using mean?

Ohhhh. How can that be better than normal-sized graphics cards?
In terms of overall performance, it cannot.
 

TheD

The Detective
Your post is just an example of how deluded and arrogant some PC owners are when it comes to their false sense of superiority, in many ways. The post I originally replied to was an example of how PC owners are often just plain wrong when they compare what is possible on PC and what is possible with the same, or fewer, FLOPS on console.
You will see on PS4 that properly programmed games will achieve good framerates no matter the resolution. PC games never get the most out of their hardware superiority, consoles nearly always do. Do you understand any of this?

Your post goes to show how insecure console owners can be when faced with the fact that the new console will not be as high end at its time of release as past consoles were.
 

mrgreen

Banned
You didn't answer any of my questions.

No I didn't, because they are obvious and don't need answering. The PC owners who have largely posted in this thread have been wrong in basing their facts on PC and not on what is relevant or possible on console...
 

mrgreen

Banned
Actually, teraflops mean exactly as much on a closed, "windows OS-free" (as if that made any significant difference) system as they do on an open one. Their meaning is pretty clearly defined.

This is 100% grade A bullshit. Every PS360 port runs much better at the same settings on a moderately competent gaming PC than it does on the consoles.

We have already heard from programmers that teraflops on console do not mean the same thing as on PC when it comes to what they will be able to do. Sorry, but this is what I believe and I don't respect you, or believe what you say.

And I would bet you a million pounds that a PC with the same specs as the PS3 could not run the PS3 exclusives better than on the PS3.
 

diaspora

Member
We have already heard from programmers that teraflops on console do not mean the same thing as on PC when it comes to what they will be able to do. Sorry, but this is what I believe and I don't respect you, or believe what you say.

Well no, both he and others have already shot this notion down. All you're doing at this point is shouting "no u".
 

TheD

The Detective
No I didn't, because they are obvious and don't need answering. The PC owners who have largely posted in this thread have been wrong in basing their facts on PC and not on what is relevant or possible on console...

They do need answering,
e.g. the first one asks why this thread does not have anything to do with PC gaming when the OP is clearly about AMD selling APUs for the PC market!

We have already heard from programmers that teraflops on console do not mean the same thing as on PC when it comes to what they will be able to do. Sorry, but this is what I believe and I don't respect you, or believe what you say.

And I would bet you a million pounds that a PC with the same specs as the PS3 could not run the PS3 exclusives better than on the PS3.

Durante is a programmer with a large amount of knowledge on the subject at hand.
You should respect what he has to say!
 

mrgreen

Banned
They do need answering,
e.g. the first one asks why this thread does not have anything to do with PC gaming when the OP is clearly about AMD selling APUs for the PC market!



Durante is a programmer!
You will respect what he has to say!

Yeah, I shouldn't have said it had nothing to do with PC owners. I was referring to other threads. And no I won't if I don't agree with him.
 

TheD

The Detective
Yeah, I shouldn't have said it had nothing to do with PC owners. I was referring to other threads. And no I won't if I don't agree with him.

Durante is by far the most knowledgeable person in this thread on this subject, thus he should be respected.
 

mrgreen

Banned
Durante is by far the most knowledgeable person in this thread on this subject, thus he should be respected.

Listen, he doesn't give much respect in the way he responds to people's posts. If he did I would respect him. And no offence, but you should mind your own business and not be a suck up.
 

diaspora

Member
We have already heard from programmers that teraflops on console do not mean the same thing as on PC when it comes to what they will be able to do. Sorry, but this is what I believe and I don't respect you, or believe what you say.

Actually, teraflops mean exactly as much on a closed, "windows OS-free" (as if that made any significant difference) system as they do on an open one. Their meaning is pretty clearly defined.

This is 100% grade A bullshit. Every PS360 port runs much better at the same settings on a moderately competent gaming PC than it does on the consoles.

no I won't if I don't agree with him.

Haaaaaa.

In any case, I'm not seeing the need to upgrade my 4GHz Phenom II X4. Getting an SSD would probably give me a better boost than this new... thing AMD's releasing.
 

TheD

The Detective
Listen, he doesn't give much respect in the way he responds to people's posts. If he did I would respect him. And no offence, but you should mind your own business and not be a suck up.

What, you don't like how he responds to fanboyish, flamebaiting BS?

The person who should be minding his own business is you; you have added nothing to this thread and clearly do not have a sound understanding of what is being talked about.
 

Durante

Member
Listen, he doesn't give much respect in the way he responds to people's posts.
I usually try to match or exceed the level of discourse already in place. I'd be happy to have a meaningful discussion on any of this if you offer anything in terms of substantial arguments other than how sick you are of deluded and arrogant PC gamers.

(Edit: and this has nothing to do with respecting me or not respecting me, or who I am; it's perfectly possible to discuss hardware simply based on facts. That's the best thing about it.)
 

spwolf

Member
This is the big news here in my opinion (*): Sony doesn't own the IP. Similar situation as with Xbox/NVIDIA, the exact thing Microsoft avoided with the 360 by staying in control of the IP - that's the main reason they have been able to do such impressive cost reductions.

Hopefully Sony has done some watertight T&Cs so they don't get screwed.





* Although I would love some of that APU bandwidth goodness in my gaming PC - Titan level GPU combined with i7 level CPU on a single mammoth die!

Well, they already said they will release a toned down version - plus a slow CPU with a big GPU won't work on a Windows PC anyway... same problems as always, it will have to be targeted specifically to get the best out of it.

Big SoC APUs will never work in the PC world because people who buy expensive CPUs also want to upgrade their graphics cards frequently... I see it working OK on laptops, but again, for Windows PCs people will be disappointed with the Jaguar cores.
 
Why do I get the feeling that you only have the vaguest of ideas of what the terms you are using mean?

How about addressing my question? This is a post from a knowledgeable user on B3D:

If the PS4 or XB720 leverage shared computation between the CPU and GPU to significant effect, there are cases where PC setups can suffer if they run into PCIe latency/bandwidth restrictions. Discrete products may also lag behind the consoles in terms of shared memory space, compared to consoles that will have it at the outset.

Why not stick an Orbis-like chip on a graphics board and call it a new generation? Even a few cores could, with the help of the driver or HSA runtime, actually make some of the general GPU processing workloads that are dominated by copy and PCIe overhead reasonable to use.

http://beyond3d.com/showpost.php?p=1711211&postcount=16
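To make the copy-overhead point concrete, here is a back-of-the-envelope cost model in C. Every figure in it is an illustrative assumption (PCIe bandwidth and per-transfer latency, a PS4-class 1.84 TFLOPS GPU, 4 FLOPs of work per byte), not a measurement; the takeaway is just that at typical GPGPU arithmetic intensities the round trip over the bus costs more time than the kernel itself, which is exactly the overhead a shared-memory APU avoids.

/* Rough cost model: when does PCIe copy overhead dominate GPU compute?
   All figures are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double pcie_bw    = 12e9;    /* ~12 GB/s effective PCIe 3.0 x16 (assumed) */
    const double pcie_lat   = 10e-6;   /* ~10 us setup cost per transfer (assumed)  */
    const double gpu_flops  = 1.84e12; /* 1.84 TFLOPS, PS4-class GPU                */
    const double flop_per_b = 4.0;     /* arithmetic intensity, FLOPs/byte (assumed)*/

    for (double bytes = 1e4; bytes <= 1e9; bytes *= 10) {
        double copy    = 2 * (pcie_lat + bytes / pcie_bw); /* to GPU and back */
        double compute = bytes * flop_per_b / gpu_flops;
        printf("%10.0f KB: copy %10.2f us, compute %10.2f us -> copy is %.0f%% of total\n",
               bytes / 1e3, copy * 1e6, compute * 1e6,
               100.0 * copy / (copy + compute));
    }
    return 0;
}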
 

onQ123

Member
Quoted from Beyond3D



Silenti said:
New info: from a poster at Ars Technica who stated last night that one of his sources was going to have their NDA expire overnight. Take it with a grain or a truckload of salt according to your own predilections.

http://arstechnica.com/civis/viewtopic.php?f=22&t=1193497&start=440
Poster Blacken00100
"So, a couple of random things I've learned:

-It's not stock x86; there are eight very wide vector engines and some other changes. It's not going to be completely trivial to retarget to it, but it should shut up the morons who were hyperventilating at "OMG! 1.6 JIGGAHURTZ!".

-The memory structure is unified, but weird; it's not like the GPU can just grab arbitrary memory like some people were thinking (rather, it can, but it's slow). They're incorporating another type of shader that can basically read from a ring buffer (supplied in a streaming fashion by the CPU) and write to an output buffer. I don't have all the details, but it seems interesting.
"

Sounds like they sneaked them SPEs into the Jaguar cores.
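For what it's worth, the "ring buffer supplied in a streaming fashion by the CPU" in that rumor is just describing a single-producer/single-consumer queue. Here's a generic C sketch of the pattern; nothing in it comes from the actual PS4 hardware, and the names (ring_push, ring_pop) are purely illustrative.

/* Minimal single-producer/single-consumer ring buffer, purely to
   illustrate the CPU-feeds-consumer streaming pattern described above. */
#include <stdio.h>

#define RING_SIZE 8  /* must be a power of two */

typedef struct {
    float    buf[RING_SIZE];
    unsigned head;  /* advanced by the producer (CPU side)  */
    unsigned tail;  /* advanced by the consumer (GPU side)  */
} ring_t;

static int ring_push(ring_t *r, float v) {
    if (r->head - r->tail == RING_SIZE) return 0;   /* full  */
    r->buf[r->head++ & (RING_SIZE - 1)] = v;
    return 1;
}

static int ring_pop(ring_t *r, float *v) {
    if (r->head == r->tail) return 0;               /* empty */
    *v = r->buf[r->tail++ & (RING_SIZE - 1)];
    return 1;
}

int main(void) {
    ring_t r = {0};
    float v;
    for (int i = 0; i < 5; i++) ring_push(&r, (float)i);   /* producer streams in */
    while (ring_pop(&r, &v)) printf("consumed %.0f\n", v); /* consumer drains out */
    return 0;
}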
 

artist

Banned
I think you are wrong. Here's why:

http://forum.teamxbox.com/showthread.php?t=381205

Last time the Xbox had a GPU that was comparable to a $400 PC GPU. This time the PS4 has a GPU that is comparable to a $150-200 GPU. That's without taking into account that both the PS3 and Xbox 360 had multi-core CPUs, a rarity among PCs of the time.
"At lower resolutions say 640x480"

500px-ha_ha_ha_oh_wowu0e1n.jpg


Let's just do a simple comparison:

Xenos
Pixel Fill: 4000 MP/s
Texel Fill: 8000 MT/s
22.6GB/s mem bandwidth

R580 ($549)
Pixel Fill: 10000 MP/s
Texel Fill: 10000 MT/s
48GB/s mem bandwidth

6800GT ($199)
Pixel Fill: 5600 MP/s
Texel Fill: 5600 MT/s
32GB/s mem bandwidth

Woosh.

Of course the Xenos had USA (unified shader architecture), but the PS4 GPU arch is also unlike any GCN GPU we have right now. In terms of sheer raw power, the gap is similar despite what people seem to generally think.
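For anyone wondering where those fill numbers come from: peak pixel fill is just ROPs (or pixel pipes) times core clock, and texel fill is TMUs times core clock. A quick sanity check in C, using the commonly cited unit counts and clocks for these parts (the R580 works out to 10400, so the 10000 above looks rounded):

/* Sanity-check peak fillrates: ROPs (or TMUs) x core clock (MHz) = MP/s (MT/s).
   Unit counts and clocks are the commonly cited specs for these parts. */
#include <stdio.h>

int main(void) {
    struct { const char *name; int rops, tmus, mhz; } gpu[] = {
        { "Xenos",   8, 16, 500 },  /* 8 ROPs, 16 TMUs @ 500 MHz */
        { "R580",   16, 16, 650 },  /* 16 ROPs, 16 TMUs @ 650 MHz */
        { "6800GT", 16, 16, 350 },  /* 16 pipes @ 350 MHz */
    };
    for (int i = 0; i < 3; i++)
        printf("%-7s pixel fill %5d MP/s, texel fill %5d MT/s\n",
               gpu[i].name, gpu[i].rops * gpu[i].mhz, gpu[i].tmus * gpu[i].mhz);
    return 0;
}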
 

Eideka

Banned
Your post is just an example of how deluded and arrogant some PC owners are when it comes to their false sense of superiority, in many ways. The post I originally replied to was an example of how PC owners are often just plain wrong when they compare what is possible on PC and what is possible with the same, or fewer, FLOPS on console.
You will see on PS4 that properly programmed games will achieve good framerates no matter the resolution. PC games never get the most out of their hardware superiority, consoles nearly always do. Do you understand any of this?

So much salt. You seem gutted that some people dare to think they did good by gaming on PC.

PC games don't need to take full advantage of 450€ hardware to look way better than a console version.

Same thing will happen next-gen, but at similar specs a console will produce better results for a cheaper price.

Hopefully, high end PC hardware is already ahead so this shouldn't be an issue.
 

Cidd

Member
Not sure if anyone can answer this. I've read in a few posts that GDDR5 has very high bandwidth, which is good for GPUs, but really high latency, which is bad for CPUs. How would high latency affect the performance of the CPU?

How will Sony be able to counter the latency problem?
 

onQ123

Member
Not sure if anyone can answer this. I've read in a few posts that GDDR5 has very high bandwidth, which is good for GPUs, but really high latency, which is bad for CPUs. How would high latency affect the performance of the CPU?

How will Sony be able to counter the latency problem?

There is no latency problem. This latency talk is just a counter to the fact that the PS4 will have fast GDDR5 memory, so people attacked the weak point for massive damage!


Did the Xbox 360 CPU have any problems because of GDDR3?
 

szaromir

Banned
"At lower resolutions say 640x480"

500px-ha_ha_ha_oh_wowu0e1n.jpg


Let's just do a simple comparison:

Xenos
Pixel Fill: 4000 MP/s
Texel Fill: 8000 MT/s
22.6GB/s mem bandwidth

R580 ($549)
Pixel Fill: 10000 MP/s
Texel Fill: 10000 MT/s
48GB/s mem bandwidth

6800GT ($199)
Pixel Fill: 5600 MP/s
Texel Fill: 5600 MT/s
32GB/s mem bandwidth

Woosh.

Of course the Xenos had USA (unified shader architecture), but the PS4 GPU arch is also unlike any GCN GPU we have right now. In terms of sheer raw power, the gap is similar despite what people seem to generally think.
Xenos had EDRAM, no point in ignoring it. I'm also fairly sure Xenos murdered the 6800GT without any hassle; I bought a 7900GT in 2006 and performance on my PC and on my console was very similar, depending on the game.
 
There is no latency problem. This latency talk is just a counter to the fact that the PS4 will have fast GDDR5 memory, so people attacked the weak point for massive damage!


Did the Xbox 360 CPU have any problems because of GDDR3?

Wrong. GDDR5 has much more latency than DDR3. This isn't opinion. It's a fact. CPUs typically operate on many small sets of data, not whole contiguous segments of data. GPUs operate on huge volumes of data (several hundreds of millions of polygons). These are facts. Operations on many small sets of data will be bottlenecked if a noticeable amount of latency happens on each operation. Large data moves will be bottlenecked by bandwidth. You're typically not doing as many reads and writes with graphics data, so latency does not cause issues, but general applications do a whole lot of reads and writes to memory, so latency will cause issues.
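The small-accesses-vs-streaming distinction is easy to demonstrate on any PC. A minimal C sketch (the array size and setup are arbitrary illustration choices): the pointer chase pays close to the full memory latency on every step, because each load depends on the previous one, while the sequential sum streams at near full bandwidth.

/* Latency-bound vs bandwidth-bound memory access, a minimal sketch.
   Sizes and iteration counts are arbitrary illustration values. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)  /* 16M elements, far larger than any cache */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    double *data = malloc(N * sizeof *data);
    if (!next || !data) return 1;

    /* Build a random permutation: every load depends on the previous
       one, so the CPU stalls for the memory latency on each step. */
    for (size_t i = 0; i < N; i++) { next[i] = i; data[i] = 1.0; }
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % (i + 1), t = next[i];
        next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++) p = next[p];      /* latency-bound   */
    double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    double sum = 0;
    for (size_t i = 0; i < N; i++) sum += data[i];   /* bandwidth-bound */
    double stream = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("chase %.3fs vs stream %.3fs (p=%zu, sum=%.0f)\n",
           chase, stream, p, sum);
    return 0;
}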
 

onQ123

Member
Wrong. GDDR5 has much more latency than DDR3. This isn't opinion. It's a fact. CPUs typically operate on many small sets of data, not whole contiguous segments of data. GPUs operate on huge volumes of data (several hundreds of millions of polygons). These are facts. Operations on many small sets of data will be bottlenecked if a noticeable amount of latency happens on each operation. Large data moves will be bottlenecked by bandwidth. You're typically not doing as many reads and writes with graphics data, so latency does not cause issues, but general applications do a whole lot of reads and writes to memory, so latency will cause issues.

Like I said, this is not a problem. The Xbox 360 CPU survived with GDDR3 and the PS4 CPU will survive with GDDR5. The PS4 is a console, not a PC.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Wrong. GDDR5 has much more latency than DDR3. This isn't opinion. It's a fact. CPUs typically operate on many small sets of data, not whole contiguous segments of data. GPUs operate on huge volumes of data (several hundreds of millions of polygons). These are facts. Operations on many small sets of data will be bottlenecked if a noticeable amount of latency happens on each operation. Large data moves will be bottlenecked by bandwidth. You're typically not doing as many reads and writes with graphics data, so latency does not cause issues, but general applications do a whole lot of reads and writes to memory, so latency will cause issues.

I know this is the new meme, but I'd like to see some numbers if it is such a worry. There are also many "facts" you are ignoring. The AMD APU line is designed to work with both DDR and GDDR as system memory, with multiple levels of cache.

The only real world benchmark I can find that seems relevant is a last gen APU (A6-3650) with DDR3 and a GPU (HD6850) with GDDR5.

gp_lat_global.png

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

I'm sure the numbers are better now, but I don't see a huge discrepancy between GDDR5 and DDR3 for overall latency.
 

mr2xxx

Banned
Actually, teraflops mean exactly as much on a closed, "windows OS-free" (as if that made any significant difference) system as they do on an open one. Their meaning is pretty clearly defined.

While teraflops = teraflops, it isn't the whole picture. I mean, look at the driver issues that AMD or Nvidia have. It's not uncommon that a new driver will increase performance for certain games and performance in general. Consoles don't have this issue, which means they are as efficient as possible. Due to that, you can state that a console GPU is more efficient than its PC counterpart, can you not?

Also, you have the AMD 7970 with 3.7 TFLOPS and the GTX 680 with about 2.5 TFLOPS, but the GTX 680 is on par performance-wise with the 7970. Would driver issues be the main reason why AMD doesn't mop the floor with Nvidia? Or is it maybe due to architecture differences? Like how the PS4 GPU = AMD 7850 TFLOPS-wise, but since it has better architecture it will perform much better.
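Peak TFLOPS itself is just a product: shader ALUs times clock times 2, since a fused multiply-add counts as two FLOPs. A quick check in C with the usual published figures for these parts (note the GTX 680 actually works out to about 3.1 TFLOPS, not 2.5, which makes the efficiency gap smaller than it looks):

/* Peak single-precision TFLOPS = ALUs x clock x 2 (FMA = 2 FLOPs).
   ALU counts and clocks are the usual published figures. */
#include <stdio.h>

int main(void) {
    struct { const char *name; int alus; double ghz; } gpu[] = {
        { "HD 7970", 2048, 0.925 },
        { "GTX 680", 1536, 1.006 },
        { "HD 7850", 1024, 0.860 },
        { "PS4 GPU", 1152, 0.800 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-8s %4d ALUs @ %.3f GHz -> %.2f TFLOPS\n",
               gpu[i].name, gpu[i].alus, gpu[i].ghz,
               gpu[i].alus * gpu[i].ghz * 2 / 1000.0);
    return 0;
}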
 

artist

Banned
Xenos had EDRAM, no point in ignoring it. I'm also fairly sure Xenos murdered the 6800GT without any hassle; I bought a 7900GT in 2006 and performance on my PC and on my console was very similar, depending on the game.
Sure, but the PS4 GPU has access to a unified 8GB GDDR5 pool. When will that feature make its way to PCs? Some time in 2014?

When we're comparing PC to console GPUs, obviously there are going to be some differences. The point here is that in terms of sheer power, what the PS4 has isn't a different situation from last generation.
 

Perkel

Banned
We have already heard from programmers that teraflops on console do not mean the same thing as on PC when it comes to what they will be able to do. Sorry, but this is what I believe and I don't respect you, or believe what you say.

And I would bet you a million pounds that a PC with the same specs as the PS3 could not run the PS3 exclusives better than on the PS3.

teraflop=teraflop. What you want to say is efficiency
 

mavs

Member
I know this is the new meme, but I'd like to see some numbers if it is such a worry. There are also many "facts" you are ignoring. The AMD APU line is designed to work with both DDR and GDDR as system memory, with multiple levels of cache.

The only real world benchmark I can find that seems relevant is a last gen APU (A6-3650) with DDR3 and a GPU (HD6850) with GDDR5.

gp_lat_global.png

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

I'm sure the numbers are better now, but I don't see a huge discrepancy between GDDR5 and DDR3 for overall latency.

You'll notice that the chart is showing latency in clock cycles. If it were showing latency in units of time, you would see a huge discrepancy - in favor of GDDR5. Anyway, even this chart doesn't mean much. Llano was AMD's first stab at an APU, and as a company their track record doesn't suggest they are cache wizards.
 
I know this is the new meme, but I'd like to see some numbers if it is such a worry. There are also many "facts" you are ignoring. The AMD APU line is designed to work with both DDR and GDDR as system memory, with multiple levels of cache.

The only real world benchmark I can find that seems relevant is a last gen APU (A6-3650) with DDR3 and a GPU (HD6850) with GDDR5.

gp_lat_global.png

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

I'm sure the numbers are better now, but I don't see a huge discrepancy between GDDR5 and DDR3 for overall latency.

Look at 1kb, 2kb, etc. Now multiply that by hundreds of thousands if not millions of read and write operations, and look at Intel's HD line, which is pure CPU (read: uses nothing but DDR3).
 
While teraflops = teraflops, it isn't the whole picture. I mean, look at the driver issues that AMD or Nvidia have. It's not uncommon that a new driver will increase performance for certain games and performance in general. Consoles don't have this issue, which means they are as efficient as possible. Due to that, you can state that a console GPU is more efficient than its PC counterpart, can you not?

Also, you have the AMD 7970 with 3.7 TFLOPS and the GTX 680 with about 2.5 TFLOPS, but the GTX 680 is on par performance-wise with the 7970. Would driver issues be the main reason why AMD doesn't mop the floor with Nvidia? Or is it maybe due to architecture differences? Like how the PS4 GPU = AMD 7850 TFLOPS-wise, but since it has better architecture it will perform much better.

AMD parts are better in raw compute tasks, which is what the TFLOPS stuff is more realistically representing, though nVidia is more widely used for that sort of stuff because of CUDA, which AMD has no answer for (enter HSA, sort of). AMD is banking on HSA because they are NEVER going to catch Intel in CPU performance (especially per watt).

These APUs will be for the mobile/low-end market. AMD already has plans for Kaveri, with the same feature set, launching later this year for typical desktop usage. The PS4 had nothing to do (aside from funding, maybe) with the launch of these products, as they were already expected to come out in some form (fewer cores/lower clock speed etc.).
 

Lkr

Member
The article is implying that an AMD APU with a 12-18 CU graphics engine will be available for the desktop space as well. But the marketing speak is mixing it up; this is "Sony tech, but what we will sell is our tech" bumble jumble.


Nope ..

http://www.neogaf.com/forum/showpost.php?p=48145238&postcount=192
Hahaohwow

The X1950XTX was the best card on the market at the time. I can't remember if it was an X1800 with more units unlocked or not, but the card released too late to have even been considered for the 360, FWIW. The 8800 came out the next year and cost more than a 360 and a few games.
 

mavs

Member
Look at 1kb, 2kb, etc. Now multiply that by hundreds of thousands if not millions of read and write operations, and look at Intel's HD line, which is pure CPU (read: uses nothing but DDR3).

1kb, 2kb represent mostly cache accesses. Look at the right end of the graph and you see that direct access to DDR3 is not very fast even with Nvidia graphics.

Look at the blue and red lines. Blue is the AMD APU using shared DDR3, like Intel HD or Durango (not exactly like Durango, but closer than CPU + dedicated GPU). Red is the AMD discrete GPU using on-board GDDR5. The results for AMD are far worse than Intel or even Nvidia here.

Notice that it is measured in clocks, and the APU has a much lower clock frequency than the GPU. Measured in (nano)seconds, the APU's latency is more than 50% longer. Judging by the results for Intel, this seems like an AMD problem.

All these products are at least two years old, so these numbers don't tell us much about upcoming consoles.
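To make the clocks-vs-time point concrete: latency in nanoseconds is just cycles divided by the clock speed in GHz. A small C example; the clock speeds below are the real GPU clocks of the two parts in the benchmark, but the 600-cycle latency is a made-up placeholder, not a measured value.

/* Convert memory latency from clock cycles to nanoseconds:
   ns = cycles / clock_GHz. Clocks are the real GPU clocks of these
   parts; the 600-cycle latency is a made-up placeholder. */
#include <stdio.h>

int main(void) {
    double cycles  = 600.0;   /* placeholder latency in cycles */
    double apu_ghz = 0.443;   /* A6-3650 GPU clock: 443 MHz */
    double gpu_ghz = 0.775;   /* HD6850 core clock: 775 MHz */

    printf("APU (DDR3):  %.0f cycles @ %.0f MHz = %.0f ns\n",
           cycles, apu_ghz * 1000, cycles / apu_ghz);
    printf("GPU (GDDR5): %.0f cycles @ %.0f MHz = %.0f ns\n",
           cycles, gpu_ghz * 1000, cycles / gpu_ghz);
    return 0;
}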
 
I think it's smart marketing. Just look at NeoGAF, for example; the lack of PC tech knowledge around here is pretty disappointing and no one cares to educate themselves. They see numbers they don't even understand and they go apeshit over it. Good move by AMD, make that money.
 

Reallink

Member
On a related subject, would it have cost Nintendo more or less to go with a similar APU (given the differences in release timing, let's say a 4-8 core Bobcat and a 600-800 GFLOPS GPU) than the apparently oddball shit they actually went with? I think one of the Chipworks analysts was suggesting the Wii U's APU could be upwards of $70, $80, or $90. The PS4 APU seems like it's going to be in the same cost ballpark.
 
So apart from the Blu-ray drive, everything in the PS4 is third party IP? Good going, Sony. No wonder the electronics business is shrinking when you jettison every single part of your console to a third party manufacturer.
 
On a related subject, would it have cost Nintendo more or less to go with a similar APU (given the differences in release timing, let's say a 4-8 core Bobcat and a 600-800 GFLOPS GPU) than the apparently oddball shit they actually went with? I think one of the Chipworks analysts was suggesting the Wii U's APU could be upwards of $70, $80, or $90. The PS4 APU seems like it's going to be in the same cost ballpark.

The PS4 APU is going to be far and away more expensive, but there are APUs you can buy right now that trump any of the old consoles (including the Wii U) for $70-90 already, so...

They eat more wattage too.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Look at 1kb, 2kb, etc. Now multiply that by hundreds of thousands if not millions of read and write operations, and look at Intel's HD line, which is pure CPU (read: uses nothing but DDR3).

I think you are reading it wrong. Look at the two relevant lines (not all of them). The absolute latency is not very important, just the relative latency. It is apples to oranges, but there is no APU system with GDDR5 yet, so this is as close as it gets right now.
 

onQ123

Member
So apart from the Blu-ray drive, everything in the PS4 is third party IP? Good going, Sony. No wonder the electronics business is shrinking when you jettison every single part of your console to a third party manufacturer.

Do you think Sony made the PS1, PS2 & PS3 CPUs/GPUs by themselves?
 

TheD

The Detective
"At lower resolutions say 640x480"

500px-ha_ha_ha_oh_wowu0e1n.jpg


Let's just do a simple comparison:

Xenos
Pixel Fill: 4000 MP/s
Texel Fill: 8000 MT/s
22.6GB/s mem bandwidth

R580 ($549)
Pixel Fill: 10000 MP/s
Texel Fill: 10000 MT/s
48GB/s mem bandwidth

6800GT ($199)
Pixel Fill: 5600 MP/s
Texel Fill: 5600 MT/s
32GB/s mem bandwidth

Woosh.

Of course the Xenos had USA (unified shader architecture), but the PS4 GPU arch is also unlike any GCN GPU we have right now. In terms of sheer raw power, the gap is similar despite what people seem to generally think.

Fail post.

You ignore that the post that was linked said "640x480 or HDTV then the xbox is most likely faster due to it's slightly higher shader power and high framebuffer bandwidth, But at higher resolutions 1600x1200 it could possible go to the X1K. They both have similar performance. In the end it comes down to the type of application.", not just "At lower resolutions say 640x480"!

That is clearly biased cherry picking!


Also, the R520 came out just before the 360, the R580 came after, and you are only comparing TMUs and ROPs; when you look at MOps and MVertices it is a whole other story.

Like I said, this is not a problem. The Xbox 360 CPU survived with GDDR3 and the PS4 CPU will survive with GDDR5. The PS4 is a console, not a PC.

The memory latency on the 360 hurt performance a lot.

I know this is the new meme, but I'd like to see some numbers if it is such a worry. There are also many "facts" you are ignoring. The AMD APU line is designed to work with both DDR and GDDR as system memory, with multiple levels of cache.

The only real world benchmark I can find that seems relevant is a last gen APU (A6-3650) with DDR3 and a GPU (HD6850) with GDDR5.

gp_lat_global.png

http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

I'm sure the numbers are better now, but I don't see a huge discrepancy between GDDR5 and DDR3 for overall latency.

That graph counts in clock cycles; the processors being compared do not run at the same clock speed!

And you can clearly see that the Intel Ivy Bridge APU with DDR3 in that test has much lower latency than the processors with GDDR.
 

artist

Banned
Fail post.

You ignore that the post that was linked said "640x480 or HDTV then the xbox is most likely faster due to it's slightly higher
shader power and high framebuffer bandwidth", not just "At lower resolutions say 640x480"!

The R520 came out just before the 360, the R580 came after.
And you are only comparing TMUs and ROPs; when you look at MOps and MVertices it is a whole other story.
The engineer himself is saying that the Xenos would be faster in corner cases. Pretty sure 480p/768p were not cutting edge PC gaming resolutions in 2005. There will be corner case wins that the PS4/next Xbox would be better at; that doesn't make them better than their PC counterparts. Besides, if you ask AMD about their new product in a new console, the answer you get probably won't be entirely objective.

MOps is still inferior to the R520 and slightly better than the $199 RV530. Not sure what you are on about...

edit: I see now;

That graph counts in clock cycles; the processors being compared do not run at the same clock speed!

And you can clearly see that the Intel Ivy Bridge APU with DDR3 in that test has much lower latency than the processors with GDDR.
Read the article
 

TheD

The Detective
The engineer himself is saying that the Xenos would be faster in corner cases. Pretty sure 480p/768p were not cutting edge PC gaming resolutions in 2005. There will be corner case wins that the PS4/next Xbox would be better at; that doesn't make them better than their PC counterparts. Besides, if you ask AMD about their new product in a new console, the answer you get probably won't be entirely objective.

MOps is still inferior to the R520 and slightly better than the $199 RV530. MVertices is still weaker than the RV530, let alone the R520. Not sure what you are on about...

edit: I see now;


Read the article

The engineer clearly says that at the res the 360 runs at, it is most likely faster.

Not a corner case!

96000 is greater than 10000 or 31200 and 6000 is greater than 1250 or 1300!

Not only did I read the article, I ran the benchmark!

It is clear that DDR3 connected to an Ivy Bridge CPU has much lower latency than any of the GDDR GPUs.
 