In floating point operations, absolutely. In general computing, no. Cell and Xenon were not very good in that regard. Deep pipeline and all that.

Isn't this CPU weaker than the CELL Processor? If true, that's pathetic.
Isn't this CPU weaker than the CELL Processor? If true, that's pathetic.
Wait till E3. They could have double-pumped FPUs too: http://forum.beyond3d.com/showpost.php?p=1711064&postcount=746

Isn't this CPU weaker than the CELL Processor? If true, that's pathetic.
We've had developers developing games on the Cell for almost 7 years now...

Not sure, but it is MUCH easier to develop for and get more out of it than the CELL processor.
In floating point operations, absolutely. In general computing no. Cell and Xenon were not very good in that regard. Deep pipeline and all that
We've had developers developing games on the Cell for almost 7 years now...
It's quite sad that we are taking a backwards step with the CPU.
Cache rules everything around me
Xbox 360 already had six hardware threads. (2x each core, unless you were silly and used XNA, then you only got access to four)
Problem is telemetry will pretty much show a majority of PCs out there are on 2 cores or less. Even the Steam HW survey can't break >50% with their 4-core userbase. So PC devs are content in having shitty multicore support.
I'd like to put my 8-core Bulldozer to some real work (even though it's technically actually 8 half-cores). Fuck you AMD, my next computer is an i7
Cache rules everything around me
Judging by the way Cerny was apologising for the Cell, I would make an educated guess that developers moaned to Sony about it.
Cache rules everything around me
360 cores were PPC with deep integer pipelines, no branch prediction, and heavy floating point resources. They were more often used to beef up the graphics since they weren't really good at the kind of logic that would run on a PC CPU.
PS4/Durango are standard x86-64 which will translate over to PC much more nicely, and lots of people have 8 threads available these days.
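A toy illustration (my own sketch, not from any dev) of the kind of transformation devs leaned on for those deep in-order PPC cores: replacing unpredictable branches with select-style operations, since a mispredicted branch flushed the whole long pipeline.

```python
# Illustrative only: branch-heavy game logic that hurt the PPE/Xenon-style
# in-order cores, and the branchless rewrite that avoids the branches.
def clamp_branchy(x, lo, hi):
    # Two data-dependent branches per call -- costly on a core with a
    # deep pipeline and weak dynamic branch prediction.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def clamp_branchless(x, lo, hi):
    # Same result via min/max, which compilers can lower to
    # predicated/select instructions instead of branches.
    return min(max(x, lo), hi)

assert clamp_branchy(5, 0, 10) == clamp_branchless(5, 0, 10) == 5
assert clamp_branchy(-3, 0, 10) == clamp_branchless(-3, 0, 10) == 0
assert clamp_branchy(42, 0, 10) == clamp_branchless(42, 0, 10) == 10
```

On an out-of-order x86 core like Jaguar the branchy version is far less painful, which is part of why these CPUs are friendlier to ordinary PC-style code.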
No branch predictor in PPC?
Not likely.
Well just found this about the gpu:
I'm hearing that PS4's GCN has 8 ACEs, each capable of running 8 CLs. I believe Tahiti is 2 CLs per ACE, 2 ACEs.
http://forum.beyond3d.com/showpost.php?p=1712819&postcount=1188
This info also appears in the Sea Islands GPU series ISA document leaked last week, but referring to an "upcoming device". So the GPU seems to be Sea Islands, but with many more ACEs. This would debunk the 14+4 division: with so many ACEs and command queues you could divide the CUs between rendering and compute tasks at will.
AMD's new Asynchronous Compute Engines serve as the command processors for compute operations on GCN. The principal purpose of ACEs will be to accept work and to dispatch it off to the CUs for processing. As GCN is designed to concurrently work on several tasks, there can be multiple ACEs on a GPU, with the ACEs deciding on resource allocation, context switching, and task priority. AMD has not established an immediate relationship between ACEs and the number of tasks that can be worked on concurrently, so we're not sure whether there's a fixed 1:X relationship or whether it's simply more efficient for the purposes of working on many tasks in parallel to have more ACEs.
One effect of having the ACEs is that GCN has a limited ability to execute tasks out of order. As we mentioned previously GCN is an in-order architecture, and the instruction stream on a wavefront cannot be reordered. However the ACEs can prioritize and reprioritize tasks, allowing tasks to be completed in a different order than they're received. This allows GCN to free up the resources those tasks were using as early as possible rather than having the task consuming resources for an extended period of time in a nearly-finished state. This is not significantly different from how modern in-order CPUs (Atom, ARM A8, etc.) handle multi-tasking.
from http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5
So it allows you to allocate instructions at will, compute or rendering ones, and the order in which they execute. By fine-tuning this order manually you could avoid stalls in the ALUs and improve their efficiency. PS4'S CERNY SAUCE.
How can 8 jaguards be weak? Do you really think you could stand against 8 jaguars?
Wait no, Cell and Xenon have simple branch predictors but don't support out of order execution
My bad, but point still stands
Stealth GPGPU thread
Can somebody translate the power of this CPU on a scale of 1 to 8 GDDR3 RAM?
We've had developers developing games on the Cell for almost 7 years now...
It's quite sad that we are taking a backwards step with the CPU.
Modern GPU's have great GPGPU capabilities so it makes sense to drop the CELL with its great floating point performance and poor general purpose performance in favor of a more balanced design.
Interesting, but all three have taken the same approach to their hardware this time round, just on different levels. These consoles (720, PS4, Wii U) are built to stress the GPU/RAM more and the CPU less.
It seems to me the CPU only needs to do so much. Also, it depends on what is being bottlenecked, I suppose, such as maybe the physics/animation or the raw pixel-pushing graphics effects.

So is the CPU gonna be the bottleneck this time for PS4?
Gotta say, I love the new specs related memes popping up around GAF since they lead to stuff like this.
I feel the same.
Unless devs take advantage of this stuff it really doesn't matter what the specs are. PS4 reveal shows this isn't going to happen. Everyone is jizzing over KZ and all it did was go from sub-HD to 1080p. Gamers have sent a message, pretty rehashes = STFU N TAKE MY MONIES
In floating point operations, absolutely. In general computing no. Cell and Xenon were not very good in that regard. Deep pipeline and all that
So, the Atari Jaguar finally is a 64-bit system. Hells yeah.
This is the first solid info I have seen that Jaguar is 28nm. This is a good thing.
Or 8 of these?
Isn't this CPU weaker than the CELL Processor? If true, that's pathetic.
We've had developers developing games on the Cell for almost 7 years now...
It's quite sad that we are taking a backwards step with the CPU.
LAAAAAWD.

Cache rules everything around me
Cache rules everything around me
We've had developers developing games on the Cell for almost 7 years now...
It's quite sad that we are taking a backwards step with the CPU.
People assume the CPU is going to be weak because it's got a relatively low clock speed (what is it, 1.6 GHz?)
If the architecture is efficient, and it seems to be, then the low clock speed won't be an issue.
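Back-of-the-envelope version of that point, with made-up IPC figures purely to show the arithmetic (these are not real benchmarks of Xenon or Jaguar): useful throughput is roughly cores × clock × instructions retired per clock, so a lower-clocked but more efficient core can come out ahead.

```python
# Hypothetical numbers, illustration only -- not measured IPC of any real chip.
def throughput(cores, clock_ghz, ipc):
    # Billions of instructions per second, to a first approximation.
    return cores * clock_ghz * ipc

old = throughput(cores=3, clock_ghz=3.2, ipc=0.5)   # narrow in-order core, high clock
new = throughput(cores=8, clock_ghz=1.6, ipc=1.0)   # wider out-of-order core, low clock
assert new > old   # half the clock, but more total useful work
```

Clock speed alone tells you very little without the per-clock efficiency next to it.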
No. Let's say 1.8GHz vanilla (2.1GHz / WPC) and 2MB shared L2 per CU. But that's my rumor...
IBM has PowerPC designs with branch predictors
Cell and Xenon were not those designs