The 360 is easy to program for and it does not have x86. I just don't get this love for x86, a chip where a lot of silicon is wasted for the sake of compatibility with PC business software from the 1980s. For me, that kind of chip has no business being in a console. Imagine using smaller (and slower) PPC general purpose cores (like the WiiU/360) and using the saved silicon budget to have some more CUs in the GPU, or if you're feeling adventurous, some SPU units on die for BC/physics/vector processing.

As for the Cell 1.5, no matter how well it could work next gen, it will still take longer to produce results on that chip than on an x86 core where developers have far more experience.
Think more about the PPU needed to manage the SPUs and the x86 needed to manage GPGPU. The branching/general-purpose ability of both the PPU and x86 is needed. But AMD has already created a library and infrastructure that uses x86, not PPU. A 1PPU/4SPU unit can be added to AMD HSA SoCs to serve the special-purpose best use case for SPUs; this is the idea behind HSA.

Agreed. I think an improved 28nm Cell with either 3PPU/8SPUs or 2PPU/12SPUs (add some more mem if the die size remains <200mm²) would have done the job easily. As it seems, AMD was able to pull all three parties into their camp, and Sony even went for the APU thing (this could bite them in the a** in the end, given the turds AMD currently markets).
This. There is no need for that. Will it make PC porting to PS4 easy? Yes, but they still have to work through all of the hardware's intricacies anyway, so why not put something in that's... worthwhile? I also feel like every hacker and their mother knowing how to program for x86 would cause loads of fun for Sony and consumers...
German Gameswelt has "learned from reliable sources" that PS4 will be released in Q1 2014.
Translated article here.
Why Google translates "Gameswelt" to "Gamespot" is beyond me, I don't think they are affiliated.
Could be true, but so could any other "reliable" source. Also, their "PS'x' launched in March" argument is pretty weak. Nice flag btw. ;-)
There isn't a gaming computer today that couldn't beat the snot out of the PS3 or 360, so IMO that automatically makes their components worthwhile. =p
Even before Bulldozer, AMD CPUs have been way too power-hungry for their performance. After the Athlon XP I never even looked at AMD. Especially in a small console, heat is a big issue. I don't want Sony to sacrifice GPU TDP because of an inefficient Steamroller or similar setup which needs too much power. AMD GPUs caught up with Nvidia; CPUs not so much... (in general)
Which might explain the Sony statements that they will wait till processes are in place; a 3-6 month delay till March of 2014 if Sony or Microsoft wants to take advantage of the new processes. 28nm and 2.5D with interposer is available for 2013, but 3D with some critical parts at 20nm (like the AMD Southbridge, quoted at 22nm) will wait for 2014.

http://www.eetimes.com/electronics-news/4371752/GlobalFoundries-installs-gear-for-20nm-TSVs said:
SAN JOSE: GlobalFoundries is installing equipment to make through-silicon vias in its Fab 8 in New York. If all goes well, the company hopes to take production orders in the second half of 2013 for 3-D chip stacks using 20 and 28 nm process technology.
The systems should be in place and qualified by the end of July, with about half of them installed today, McCann said. The company aims to run its first 20 nm test wafers with TSVs in October and have data on packaged chips from its partners by the end of the year.
GlobalFoundries' schedule calls for having reliability data in hand early next year. The data will be used to update the company's process design kits so its customers can start their qualification tests in the first half of the year.
If all goes well, first commercial product runs of 20 and 28 nm wafers with TSVs can start in the second half of 2013 and ramp into full production in 2014, McCann said.
Not sure if serious.
We are comparing 2005 products to... 2012.
That's kind of my point, though. No matter what is packed into these consoles, it should be worthwhile, simply because the time gap between current and next gen guarantees a worthwhile performance increase.
I guess I should ask, if you don't consider x86 architecture worthwhile, then what would you be happy with?
Kabini APU with a 9-25W TDP, leaving Sony with 175W of TDP left for the standalone GPU; 75% more than for the RSX. Even Kaveri would allow a 165W TDP Southern Islands GPU. That can't be right, can it? A desktop Radeon 7870 or 7950 would "easily" fit that number with some changes and stay in the same TDP region as Cell/RSX.

Which is why it's likely they switched to Jaguar cores, so they can maintain their GPU target.
I'm very shocked that Sony hasn't elected to go with the PowerXCell 8i for the PS4.
Something more attuned to a console's needs? I mean... sure, you want SOME general processing on there, but you won't likely need high double-precision performance for games...
Is that the CPU in one of IBM's supercomputers? I don't know if it'll be feasible. I would love it if Cell development went on; the libraries are there, and devs wouldn't have the problems they had at the start of the gen.
8-core ARM A15 on a performance-optimized process (>3 GHz) with SPUs as FP units instead of NEON.
I'm sorry but what is high double performance?
Also, what would you think a console's needs are? There's no reason an AMD CPU can't be as good, if not better, than the other possible options out there. Who knows, maybe they can add some vector registers to the CPU if floating-point performance is a big concern; it would be useful for physics calculations and such. Though the vast majority of floating-point performance is going to come from your GPU anyway.
"Double" the data type... like double floating point? There is single and double point. General CPU's are high in double point precision while GPUs are higher in single point. Cell is like a stream processor... a GPU, very high single point precision. They can use the Jaguar cores for doubles, and have the GPU do its graphics processing, but I really like the idea of having those 1PPU 4SPE modules that Jeff talk about. Because it allows for low power media processing, backwards compatibility, and dedicated to physics calculations for PS4 games.
Will you give up your 3D stacking wet dream? Consoles are not the playfield for implementing untested, expensive new technologies. You will NOT see 3D memory stacking in any of the new consoles. If Sony wants to bring their new console out in early 2014, they will have ordered all the materials by early 2013. This is how mass manufacturing works; otherwise your costs would skyrocket.
Also think of the address/data bus nightmares. Supposedly we have a CPU and GPU in the APU and an external GPU fighting for the address/data bus, and now you want to add another multiprocessor unit (is there redundancy in this unit?) to this mess? Good luck with that; you probably just designed a $600 console.
With some small "emulation" you could work two of them, as it would be working with the same hardware. Right? You'd just have an extra PPU. Also, if they can have it in HSA with the rest of the hardware, latency wouldn't be much of an issue......? (I'm taking a stab at things...)
The CPU is already a Jaguar, rumored at 4 cores... take it down to 2? Tune the GPU down just a tad, since it won't have to worry about doing physics calculations.
Consoles are not the playfield for implementing untested, expensive new technologies.
You mean like Cell and Blu-ray?
Jeff is wrong, a 1PPU 4SPU module would not allow BC. Also by including such a module, they would likely have to sacrifice something else in the CPU or GPU area.
I hope you realize you repeated sections of your post. It was like scrolling through a whole page of posts just to get to the bottom of yours.
I don't understand why you continue this argument about Cell. Multiple people have told you Cell >>> Xenon. Both the 360 and PS3 have their strengths and weaknesses: the 360's is its GPU, while the PS3's is its CPU. This has been known for years now. I don't even understand how this is still debatable. This argument is from 2006.
As for GDDR5 vs GDDR3: obviously 4GB of GDDR5 would be accompanied by a 256-bit bus as well. The question is whether GDDR3 with 6-8GB of RAM and a 256-bit bus would be better than 4GB of RAM and a 256-bit bus. Essentially a lot more RAM vs a lot more bandwidth. That's the question.
Regarding the memory choices: as a consumer I don't really care about patents and proprietary technology as long as it benefits me through price, speed, or quality. As an engineer my viewpoint is very different, but I won't go into that now. Sony worked with Rambus and XDR before. IF (and that is the problem) they now have an agreement, and XDR2, which is "faster" (probably not cheaper) than GDDR5, will be available in bulk, I don't see why Sony shouldn't go that route.
With 4GB of GDDR5 I hope Sony won't cut costs with a smaller bus, because their engineers would simply waste the GDDR5 advantage. A lot of synthetic benchmarks really benefit from more (graphics) memory bandwidth, but not so much from a size increase.
I am not expert enough (or maybe a bit lazy too *g*) to calculate the maximum bandwidth for DDR3, GDDR5, or XDR2, but I guess there is a reason why AMD uses GDDR5 for their top-tier GPUs, and as an example my GeForce FX880m only has DDR3...
Wishful thinking:
4GB of XDR2 in the PS4, which is 50% faster than GDDR5 at the same power consumption. To minimize heat, Sony could adjust the clock speed/voltage of the XDR2 to reach GDDR5 speed with less power.
He has SOME good points. I lost him when he compared the GTX 280 to the Cell. Even more so after he tried to discredit every advantage the Cell has over other architectures.
IIRC (I'm no tech expert), the cache each SPE had was adequate. It was the memory interface between the XDR and the Cell that was the bottleneck. If an extra-wide memory I/O were used today, that would resolve the problem (correct me if I'm wrong, Jeff).
Amazing how console warriors can turn a thread into shit, isn't it? =p
Not sure the lead free solder was the core issue since the PS3 had to use it as well, no?
Looking at performance on a PC isn't really an accurate way to determine how things would turn out in a console. Different games are programmed to take advantage of different types of CPUs. Some scale well with the number of cores while others do not because of the amount of configurations out there and developers working around a lowest common denominator.
Yeah, what happened to AMD CPUs? They used to be right up there with Intel, but since Bulldozer I haven't heard a single good thing about them.
Supposedly they were getting new dev kits around E3. I figure there's a few different possibilities why nothing's leaked about it yet.
A. New dev kits didn't end up releasing around E3 and were delayed
B. There were no major changes in this revision
C. They were just able to keep a better lid on things this time around / no one's cared to leak anything
D. Combination of both B and C
E. None of the above (then explain why)
what do you guys think?
You mean like Cell and Blu-ray?
And the Emotion Engine?
Good fucking lord.

Infinite budget? Over $500 again? You will not have 4GB of GDDR5 on a 256-bit bus. That's for sure.
http://www.newegg.com/Product/Produ...0006662&isNodeId=1&Description=gtx680&x=0&y=0
That cache isn't meant to be shared. That's why it's local to the SPEs. The main bottleneck was the memory controller into the XDR. They specifically used XDR because the Cell requires extremely fast access to memory... that is the specific reason for that local SPE cache. So the problem wasn't with the Cell, it wasn't with the XDR, it was the controller between the two.
You used GTA4 in one of your posts as to why the PS3 chugged a lot... that game ran on shit even with a really good PC. So no, it's not a good way to judge an engine.
I'm referring to you posting Newegg prices. Yes, GDDR5 is expensive. Yes, 2 GB of it is expensive. Yes, a 256-bit bus is expensive. There are many reasons why it's expensive, but don't post a desktop GPU to explain your position on a console one.

You have to be kidding to expect GDDR5 + a 256-bit bus in a console.
GDDR5 is meant to be paired with a 128-bit bus. That way you would have smaller chips and simpler motherboard logic, saving money and silicon budget. And later on it would be easier to shrink into slim models with fewer RAM chips.
Yes... I do know. It's nothing complex. It's the "highway", or better yet, the "traffic lights", that control the data flow to and from the CPU and memory.

I'm not sure you understand how an IMC works.
That engine still ran like crap. And just because an engine is CPU-bound doesn't make it inherently good. Which is what it sounded like you were saying (I may be mistaken).

I used GTA as an example of a CPU-bound engine. Can move to Source if you want to. There are a few other examples.
dr. apocalipsis said:
Cell > Xenon is not an argument. Neither is Cell >>> Xenon.
It's not as easy as that. Cell is weak at the integer level, which is the main task of a CPU. GTA IV lagging and dropping frames is a CPU-bound issue, for example. Cell has strengths outside of the CPU requirements, and weaknesses at the CPU requirements. That's my point. Go to Beyond3D and have a read. Not even the biggest Cell fanboy there would claim such a thing as Cell >>> Xenon 'because everyone knows'. To me, someone claiming that is no different from someone thinking the PS2 was stronger than the NGC.
dr. apocalipsis said:
I compared Cell vs a Q6600 in real-world FP usage. Some other guy claimed Cell was more efficient than a GTX 280 because it was in the table I posted. I replied to that too.
dr. apocalipsis said:
Cache is not appropriate at SPUs because they are local and not accessible by the PPU or any other SPU. That's the main bottleneck and fiasco of the current Cell. Some good-looking raw numbers in synthetic benchmarks, but unreachable in real-world environments.
dr. apocalipsis said:
PC is accurate at showing how game engines work and what they demand.
dr. apocalipsis said:
Cell lacks IPC at the PPE. Current Atom-like level. Then you have to rearrange the SPU subsystem.
Once you did that, you have to match current Intel/AMD CPU performance, and AMD/Nvidia GPU performance.
And after that, you have to teach developers to program to a whole different paradigm and convince them to invest in that new platform.
And after that you have to take care of manufacturing some exotic chips that only you use, so there will be no one else helping to drive costs down.
I just can't see that as a wise business model.
It's not about the instruction set? :Facepalm

dr. apocalipsis said:
As I said earlier, it's not about the instruction set. It's about core architecture. x86 wasting silicon and not being RISC or whatsoever is an old cliché. Current CPUs just use decoders to do that. CPUs don't work with x86 or PowerPC instructions, but with micro-ops. Complex instructions like x86's have not been a problem since Pentium days.
Changing the instruction set would have little to no performance advantage, but would cause a high cost by forcing every program out there to change.
dr. apocalipsis said:
You need a bare minimum at the CPU to feed the GPU. Those APUs are just too weak. Some of them are even too weak for their own integrated GPU. Steamroller is not ideal, but would not be such a bottleneck for a beefier GPU.
Cache are not appropriate for SPU's? Oh really? So are they supposed to just pull local data from the cloud or?
Explicit DMAs between system memory and the local memory.
There was the on-chip EIB between the Cell components, but what I'm saying (from what I understand; I may be wrong) is that the FlexIO between the Cell and the XDR wasn't fast enough to cater to the speeds the XDR had. Either that or the bus on the XDR side wasn't wide enough... I'm not sure which one it was...
What are you talking about? The main task of a CPU is integer level? What does that even mean? The main task of what type of CPU? You do realize that there are a lot more fields outside the Wintel spectrum of computing.
I really don't think you understand what you are ranting about. Integer performance only really matters when there are two coordinates, X & Y (2D). You decrease X once you move left one pixel; very discrete. As soon as you add Z, your numbers start to float. This is basic stuff; the fact that you seem ignorant of this tells me you are not really qualified to discuss it in such detail.
Not even going to get into what you have wrong about GTA IV or Cell vs Xenon. Go read some more books.
What exactly are you using for "Real World" usage of Cell, if you don't mind me asking?
Cell's IPC isn't something to write home about, but until Intel dropped Core it was par for the course. What exactly do you mean by rearranging the SPU subsystem? And what about matching CPU/GPU performance? Performance at what?
And this whole different paradigm is pretty much what is gonna be standard in the very near future. What, is parallelization bad now too or something?
It's not about the instruction set? :Facepalm
What do you think helps determine core architecture?
HUH?
What the hell?!
Try to run the CPU Queen benchmark on a GPU.
OMFG, just try to run some highly serialized lossless data compression such as .rar on a GPU.
Just... Just... OMFG.
Infinite facepalm.
Any program that will use processing power to obtain some valuable data after an input. Not just a test algorithm used to benchmark.
I was talking about local private caches vs shared pool cache needed for multithreading.
Cell/Xenon IPC is low not by today's standards, but by 2006 ones. Lack of OoO execution, poor prefetch, branch prediction, frontend, etc... Nothing to do with the instruction set.
And loled at Cell paradigm as the new standard.
Yes. What the fuck happened there?
Regardless of how correct or incorrect your opinions are, to me, you appear to be overly pretentious in how you are presenting them.
dr. apocalipsis said:
What the hell?!
Try to solve the CPU Queen problem on a GPU.
OMFG, just try to run some highly serialized lossless data compression such as .rar on a GPU.
Just... Just... OMFG.
Infinite facepalm.
dr. apocalipsis said:
Any program that will use processing power to obtain some valuable data after an input. Not just a test algorithm used to benchmark.
dr. apocalipsis said:
I was talking about local private caches vs the shared pool cache needed for multithreading.
Which is patently false. Each SPE has a DMA engine to move data between the SPEs (amongst each other) and to the PPE.

you said:
Cache is not appropriate at SPUs because they are local and not accessible by the PPU or any other SPU. That's the main bottleneck and fiasco of the current Cell. Some good-looking raw numbers in synthetic benchmarks, but unreachable in real-world environments.
dr. apocalipsis said:
Cell/Xenon IPC is low not by today's standards, but by 2006 ones. Lack of OoO execution, poor prefetch, branch prediction, frontend, etc... Nothing to do with the instruction set.
And loled at the Cell paradigm as the new standard.
How old are you again?
Anyways,
What does Rar's algorithm performance on GPUs have to do with anything I just said? Or the CPU Queen on GPU's? How about you respond to what I actually posted instead of going on random ass tangents about nothing?
You do know that the number of these types of programs is practically unbounded, right? For you to give a definitive answer either way is dumbfounding.
Each SPE can run only one HW thread. I'm not sure what you are going on about? This is what you posted...
Which is patently false. Each SPE has a DMA to move data between the SPEs(amongst each other) and to the PPE.
By 2006 ones do you mean like Pentium 4s and AMD FX's? LOL.
Low IPC high clock pentium 4s were common as hell in that era. It wasn't until Core that Intel flipped the script...
You really should hit the books again, if you have at all.
dr. apocalipsis said:
It's pretty difficult for me to try to argue with someone who just spat on the usefulness of integer performance in a CPU. Pretty embarrassing to try to explain, to someone who calls me ignorant, the absolute need for serial processing on a CPU and how it's the paradigm of current computational systems.
Yeah, I do. But um, what does this have to do with you claiming that a Q6600 would best a Cell at these programs?

dr. apocalipsis said:
As many programs as there can be out there, none will reach, or even come close to, the numbers of any synthetic benchmark. Some architectures are just more real-world proof than others. Don't you agree?
dr. apocalipsis said:
Each core of a Q6600 can run only one HW thread too. What's your point?
It's not false, since:
1st, it's not simultaneous.
2nd, the PPE can't see or access any SPE cache.
3rd, SPEs can't access any other SPE's cache.
SPEs work as a black box to the PPE. You send data to an SPE, the SPE processes the data, the SPE delivers results. Cell uses a list system to organize itself and a ring bus to manage data transfers.
That is NOT simultaneous multithreading such as in any modern multi-core CPU, call it Phenom or Xenon.
IBM said:
If the SPE needs to access main memory or other memory regions within the system, then it needs to initiate DMA data transfer operations. DMAs can be used to effectively transfer data between the main memory and an SPE's local store, or between two different SPE local stores. Each SPE has its own DMA engine that can move data from its own local store to any other part of the system, including local stores belonging to other SPEs and IO devices.
dr. apocalipsis said:
Conroe was launched in 2006. AMD K9 is from 2005.
dr. apocalipsis said:
How can you be so disrespectful without having proven anything?

I don't need more proof of your lack of knowledge.
You can't distinguish between memory access and data transfer. I guess you don't know what a DMA interrupt is and how it hurts the performance of other processors. You certainly don't know how queue logic and sequences of data transfers have no place in a multithreaded system looking for high performance.
You linked a doc explaining in detail what I already pointed out, and you actually think it can somehow prove me wrong. That's only because you can't understand it. This is the very same mechanism used in the SEGA Genesis to transfer data between EEPROM and VRAM.
I claim a Q6600 is a way better overall CPU than a Cell. Yes, I do. You can't claim otherwise without looking like a fool.
Core architecture and K9 were launched before Cell.
I'm tired of this.