
PS4 Rumors: APU code-named 'Liverpool', Radeon HD 7970 GPU, Steamroller CPU, 16GB Flash


CliffyB's Cock Holster
tinfoilhatman said:
Weird, from what I understand it's far easier/faster to develop for, say, a multi-core x86 CPU with OOO processing (or even the Xbox's IBM multi-core) than for anything exotic like Cell, where parallel processing is not an option but a requirement.

So, setting performance aside, programming for processors that have to process data in parallel is just as easy as conventional development? I'm not a developer (I work closely with them and QA teams), but that goes against everything I've ever heard.

There was a problem at the start of this gen when coders were largely indoctrinated into the Wintel thread-based programming model rather than the "job"-based approach that is vastly more efficient in a multi-core environment. Parallelization is the future, not super-fast single cores, so this is/was a necessary transition.

Also, you need to remember that on a coding team there are people working at a whole range of levels in relation to the hardware. Engine guys will be looking at low-level optimization, but more will just be pumping out largely platform-agnostic mid-level C, and some will be using higher-level scripting solutions like Lua or some bespoke equivalent for maximum portability.

Point is, the bulk of the code-base is going to be serving the needs of the design/content rather than the hardware it's running on.

What affects things most on a day-to-day basis are the quality of tools and debug systems, because errors in implementation are inevitable regardless of hardware. The more complex the hardware, the more important this aspect becomes, but the key thing to remember is that once these issues are solved, they are solved. Once you've got the tools and a staff experienced at using them, you shouldn't need to be constantly revisiting this stuff, even on systems which are really picky about memory/data alignment like PS3.

Code is maths; it's a pyramid of knowledge which is constantly built up layer upon layer. So "difficulties" tend to be of decreasing significance the longer the platform lasts in any case, which is why shifting CPU families is kind of a double-edged sword. You still have the initial "bump" where existing staff need to retrain/acquaint themselves with the new tools, tech, and their peculiarities.

More than anything else though, managers want results, not excuses. Coders are paid problem-solvers, so whinging about the difficulty of a particular piece of hardware really doesn't cut it. If coder X is struggling, you simply replace him with someone who can handle it, and ideally relish the challenge.
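To make the thread-vs-job distinction concrete, here's a minimal, purely illustrative C++11 sketch (every name in it is invented for the example, not taken from any real engine): instead of pinning one long-lived thread to each subsystem, work is chopped into small independent jobs that a fixed pool of workers drains.

Code:
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Illustrative job pool: a fixed set of workers drains a queue of small, independent jobs.
class JobPool {
public:
    explicit JobPool(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~JobPool() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        cv_.notify_all();
        for (std::thread& t : workers_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(mutex_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // executed outside the lock so the other workers keep pulling jobs
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};

Per frame, an engine built this way submits lots of tiny jobs (animate one character, cull one batch, mix one audio block) rather than dedicating whole threads to subsystems, so the same code scales from two cores to eight without being restructured.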
 

StevieP

Banned
Well, Sandy Bridge is designed a lot differently than the PPE. You can't really compare them on flops.

You can compare many CPUs to each other; they have their own strengths and weaknesses. The P4 Prescott was designed quite differently from Sandy Bridge as well (and is a deep-pipeline, high-stage-count CPU, much like the Waternoose/PPE), but there aren't many areas where today's Sandy/Ivy chips from Intel don't come out ahead.
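To illustrate why raw flops comparisons mislead (the numbers below are rough, textbook-style assumptions, not measured specs): peak throughput is just cores times FLOPs per cycle per core times clock, and it says nothing about how much of that an in-order, deep-pipeline core actually sustains versus an out-of-order one.

peak GFLOPS ≈ cores × (SIMD FLOPs per cycle) × clock in GHz
one 3.2 GHz in-order core with 4-wide FMA: 1 × 8 × 3.2 ≈ 25.6 GFLOPS peak
a 3.4 GHz quad-core with 8-wide AVX mul+add: 4 × 16 × 3.4 ≈ 217.6 GFLOPS peak

The first might sustain only a small fraction of its peak on branchy, cache-missing game code; the second, with out-of-order execution and big caches, sits far closer to its peak. Same metric, very different meaning.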
 

mrklaw

MrArseFace
What would be the point? You would not be able to run any Android apps. Android is just a light version of Linux and the PS3 is probably based on FreeBSD, so they are cousins.

It'd let you run Google TV. It's a possibility for PS4, but I don't see it for PS3 due to the hardware changes needed for passthrough, and Sony clearly needs to cost-reduce for this latter period of the PS3's life, not add new things.
 
You can compare many CPUs to each other; they have their own strengths and weaknesses. The P4 Prescott was designed quite differently from Sandy Bridge as well (and is a deep-pipeline, high-stage-count CPU, much like the Waternoose/PPE), but there aren't many areas where today's Sandy/Ivy chips from Intel don't come out ahead.

You can, but a direct comparison between the PPE and anything in the Core family and beyond is quite disingenuous going off specs.


So Cell had nothing to do with increased development costs and decreased performance for many PS3 multi-platform games?


Weird, from what I understand it's far easier/faster to develop for, say, a multi-core x86 CPU with OOO processing (or even the Xbox's IBM multi-core) than for anything exotic like Cell, where parallel processing is not an option but a requirement.

So, setting performance aside, programming for processors that have to process data in parallel (and in order) is just as easy as conventional development? I'm not a developer (I work closely with them and QA teams), but that goes against everything I've ever heard.

Parallel processing is the future, there's really no way around that. Dies are getting harder and harder to shrink.
 

mrklaw

MrArseFace
If dies are getting hard to shrink, why is Intel using half the area for integrated graphics? I can understand the appeal for cheap laptops, but surely there is also a market for filling that silicon with CPU cores. You could probably get 8-12 cores on there.
 
You can, but a direct comparison between the PPE and anything in the Core family and beyond is quite disingenuous going off specs.

Parallel processing is the future, there's really no way around that. Dies are getting harder and harder to shrink.

I can't wait for x86 to die off. These 3D transistors (and wafers) are only a temporary slowdown of Moore's law for this architecture type. As many bottlenecks as the Cell had, it is far more impressive than any x86 I've seen (not saying it's the most powerful... just for what it is). If people invested more into this, maybe even a Cell with more than a measly 16kb per SPE and better memory controllers.. etc... we'd get some ridiculous breakthroughs. By far the biggest bottleneck was memory. Not being able to access RAM fast enough, that's why XDR was needed.

Maybe I'm just spewing crap, but as everyone is pointing to, parallel processing is the future, and the Cell was one of the first CPUs that was not only widely used but also highly parallel.
 

StevieP

Banned
I can't wait for x86 to die off. These 3D transistors (and wafers) are only a temporary slowdown of Moore's law for this architecture type. As many bottlenecks as the Cell had, it is far more impressive than any x86 I've seen (not saying it's the most powerful... just for what it is). If people invested more into this, maybe even a Cell with more than a measly 16kb per SPE and better memory controllers.. etc... we'd get some ridiculous breakthroughs. By far the biggest bottleneck was memory. Not being able to access RAM fast enough, that's why XDR was needed.

Maybe I'm just spewing crap, but as everyone is pointing to, parallel processing is the future, and the Cell was one of the first CPUs that was not only widely used but also highly parallel.

There are some pretty good reasons why we're not using Cells in our desktop PCs.
You can probably google some of IBM's reasons from when they discontinued putting any R&D into it and basically wrote it off. IBM still puts R&D into some of its other decade-old architectures and ISAs, just as context to that.

The world's biggest and smartest chip manufacturer also had an initiative with some similarities to Cell that was cancelled in 2010 due to disappointing performance.

x86 isn't going away.
 

tinfoilhatman

all of my posts are my avatar
You can, but a direct comparison between the PPE and anything in the Core family and beyond is quite disingenuous going off specs.

Parallel processing is the future, there's really no way around that. Dies are getting harder and harder to shrink.


So when is mainstream development going to go that route? I just don't see it happening anytime soon.

If it's so much more efficient, with no time or technical drawbacks, when are parallel processors like Cell going to replace the conventional Intel/AMD CPU, and why is it taking so long? It's not like this is a new concept.
 

Razgreez

Member
If dies are getting hard to shrink, why is Intel using half the area for integrated graphics? I can understand the appeal for cheap laptops, but surely there is also a market for filling that silicon with CPU cores. You could probably get 8-12 cores on there.

And watch as efficiency and profitability fly out the window. And that's only one of the reasons. Intel is looking at becoming a one-stop shop. A company doesn't grow by producing the same product continuously; diversity and innovation breed growth.

One must also remember that more than half of all computing devices are mobile (not including phones), so thermal efficiency is paramount.


So when is mainstream development going to go that route? I just don't see it happening anytime soon.

If it's so much more efficient, with no time or technical drawbacks, when are parallel processors like Cell going to replace the conventional Intel/AMD CPU, and why is it taking so long? It's not like this is a new concept.

Likely never. IBM developed the Cell, and it was left up to other companies to invest in it. Few did, therefore it makes no economic sense to spend a large amount of time and research pursuing it any further.
 
So when is mainstream development going to go that route? I just don't see it happening anytime soon. When is something like Cell going to replace the conventional Intel/AMD CPU, and why is it taking so long?

It's not that something like Cell is gonna replace the AMD/Intel products. Cell is a different beast entirely. The one thing about Cell that is here to stay is the paradigm. Look at GPGPUs to see what I'm talking about.

Intel is billions of dollars deep into parallel processing, and I'm sure AMD is too.

The thing is, many programmers who learned the trade at a much earlier time learned a fundamentally different way. If Intel/AMD said fuck it and phased out the current architectures, the industry would grind to a halt because of the overall lack of knowledge and talent in parallel programming.
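For anyone wondering what "the paradigm" looks like in code, here's a tiny, purely illustrative C++ sketch (the function names and the OpenMP stand-in are invented for the example, not any console API): the serial mindset walks an array on one fast core, while the data-parallel mindset writes a per-element kernel and lets a runtime (GPGPU driver, job scheduler, whatever) spread it across the hardware.

Code:
#include <cstddef>

// Serial mindset: one fast core walks the whole array.
void scale_serial(float* v, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)
        v[i] *= k;
}

// Data-parallel mindset: a tiny kernel per element, with no dependence
// between elements, so the iterations can be spread across many cores,
// SPEs, or GPU lanes by whatever runtime launches them.
void scale_kernel(float* v, std::size_t i, float k) {
    v[i] *= k;
}

void scale_parallel(float* v, std::size_t n, float k) {
    // Stand-in for a real launch (CUDA grid, OpenCL NDRange, a job pool, ...).
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(n); ++i)
        scale_kernel(v, static_cast<std::size_t>(i), k);
}

The point of the second form is that the hard part (how the work splits and where the data lives) is made explicit and hardware-friendly, which is what Cell forced on developers and what GPGPU programming asks for today.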
 

tinfoilhatman

all of my posts are my avatar
It's not that something like Cell is gonna replace the AMD/Intel products. Cell is a different beast entirely. The one thing about Cell that is here to stay is the paradigm. Look at GPGPUs to see what I'm talking about.

Intel is billions of dollars deep into parallel processing, and I'm sure AMD is too.

Yeah, I can't argue with the legacy mindset holding it back; I'm just not convinced it's appropriate for all applications and that there are no complexity/cost drawbacks in going with parallel processors and development.

Also, why does it "appear" that Sony and Microsoft are actually moving AWAY from this direction for their future CPU parts instead of towards it? It just doesn't make any sense to me UNLESS development costs and timelines are a major decision factor.
 
There are some pretty good reasons why we're not using Cells in our desktop PCs.
You can probably google some of IBM's reasons from when they discontinued putting any R&D into it and basically wrote it off. IBM still puts R&D into some of its other decade-old architectures and ISAs, just as context to that.

The world's biggest and smartest chip manufacturer also had an initiative with some similarities to Cell that was cancelled in 2010 due to disappointing performance.

x86 isn't going away.

Yeah, I understand all of this stuff, but general computing is kind of disappearing. A lot of processes now are media related. Of course it would kill legacy support, but HSA would still allow that to work with the ISA, wouldn't it?
 
There are some pretty good reasons why we're not using Cells in our desktop PCs.
You can probably google some of IBM's reasons from when they discontinued putting any R&D into it and basically wrote it off. IBM still puts R&D into some of its other decade-old architectures and ISAs, just as context to that.

The world's biggest and smartest chip manufacturer also had an initiative with some similarities to Cell that was cancelled in 2010 due to disappointing performance.

x86 isn't going away.
x86 as an ISA, I think, is going to die; processors made by AMD and Intel without x86 ISA microcode will continue, as they are state-of-the-art out-of-order processors. There would have to be a new ISA accepted, and I'd guess AMD is trying to make that happen with the HSA Foundation and by releasing its Fusion code as open source.
 
Yeah, I can't argue with the legacy mindset holding it back; I'm just not convinced it's appropriate for all applications and that there are no complexity/cost drawbacks in going with parallel processors and development.

Also, why does it "appear" that Sony and Microsoft are actually moving AWAY from this direction for their future CPU parts instead of towards it? It just doesn't make any sense to me UNLESS development costs and timelines are a major decision factor.

Why would you say that? I'm pretty sure both their machines will have GPGPUs and multiple CPU cores that will ultimately require a good bit of parallelization to utilize. The biggest difference this time is that there are 10x better tools and documentation compared to '05-'06....


If you are genuinely interested, this book is a great read (it's free to read online):
http://books.nap.edu/catalog.php?record_id=12980
 
x86 as an ISA, I think, is going to die; processors made by AMD and Intel without x86 ISA microcode will continue, as they are state-of-the-art out-of-order processors.

One thing GPGPUs and the like can do is create a shift in graphical focus from raster to vector. It allows for more versatility, and these high-number-crunching CPUs should be able to handle it easily. Vector data size < raster data size.
 
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?
 
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?

Even Kutaragi wouldn't be that insane.
 

DieH@rd

Banned
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?

There is no need for Sony to move away from Blu-ray in the next 8-10 years. Almost every Blu-ray player with upgradeable firmware can be tweaked to support 4-layer [100GB] Blu-ray discs. That's more than enough even for 4K movies [H.265].

As for 1GB of DDR4 memory and 1GB of GDDR5 integrated onto the GPU... there is no point thinking about that right now. We have almost ZERO "reliable" info about PS4 memory right now.
 

tinfoilhatman

all of my posts are my avatar
Why would you say that? I'm pretty sure both their machines will have GPGPUs and multiple CPU cores that will ultimately require a good bit of parallelization to utilize. The biggest difference this time is that there are 10x better tools and documentation compared to '05-'06....


If you are genuinely interested, this book is a great read (it's free to read online):
http://books.nap.edu/catalog.php?record_id=12980


Dumb question, but with the 360 was parallel processing ever used (with the CPU)? I was under the impression it wasn't, and that this was something unique to GPUs and processors like Cell.

Thanks, gonna check out that book; hopefully I can figure out how to dump it to my Kindle for the bus/train.
 

Elios83

Member
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?

No reason for that; the best we'll get is a Blu-ray drive with 4-layer support and 16X reading.
 

RoboPlato

I'd be in the dick
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?
If they use anything that has a higher capacity than Blu-ray, it'll be BDXL, which holds 128GB. Quad HD is also not happening on PS4, unless it gets added in for movies/menus later on in the gen. No games, except maybe a couple of PSN titles, will run in that resolution either.

BTW, does anyone know if Blu-ray read speeds have caught up with DVD yet? Being able to stream faster from Blu-ray would be a huge benefit for next-gen games, since the slow speed of the PS3 reader really held it back on some games.
 

Interesting. Further supports my PS4/720 2014 launch theory. Sony has been saying the same thing. I wonder how much "waiting till a significant jump is possible" pertains to memory densities...

As in, delaying the launch another 6-12 months from the holiday 2013 timeframe, would we see more changes than just more memory? Is the GPU and CPU architecture gonna have a chance of being upgraded too, or will it most likely remain unchanged?

Interesting. Further supports my PS4/720 2014 launch theory. Sony has been saying the same thing. I wonder how much "waiting till a significant jump is possible" pertains to memory densities...

As in, delaying the launch another 6-12 months from the holiday 2013 timeframe, would we see more changes than just more memory? Is the GPU and CPU architecture gonna have a chance of being upgraded too, or will it most likely remain mostly unchanged?


pretty sure Sony/MS don't give a shit what Epic thinks when it comes to console release timing. It's a business decision that they'll make.
 

Iacobellis

Junior Member
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?

1GB of RAM is tiny for a game console set to release in 2013. The PS2 had 32MB of RAM, and the PS3 made a jump to 256MB, which was impressive for 2006; most personal computers didn't even come equipped with 1GB of memory yet. The bare minimum for the new consoles should be 2GB. If Sony and Microsoft are going to be focused on multimedia hubs with multitasking abilities and improved games, 1GB will not cut it.
 
pretty sure Sony/MS don't give a shit what Epic thinks when it comes to console release timing. It's a business decision that they'll make.

I would agree, yes. But in this case Sony has been saying the same, even to its stockholders. It's not a case of Sony agreeing with Epic, but that they just so happen to think the same way.

1GB of RAM is tiny for a game console set to release in 2013. The PS2 had 32MB of RAM, and the PS3 made a jump to 256MB, which was impressive for 2006; most personal computers didn't even come equipped with 1GB of memory yet. The bare minimum for the new consoles should be 2GB. If Sony and Microsoft are going to be focused on multimedia hubs with multitasking abilities and improved games, 1GB will not cut it.

It would be 1GB of system RAM and 1GB of video memory, for a total of 2GB. I agree that 2GB is too low, hence one of the reasons we might not see this console in 2013.
 

Iacobellis

Junior Member
I would agree, yes. But in this case Sony has been saying the same, even to its stockholders. It's not a case of Sony agreeing with Epic, but that they just so happen to think the same way.

It would be 1GB of system RAM and 1GB of video memory, for a total of 2GB. I agree that 2GB is too low, hence one of the reasons we might not see this console in 2013.

1GB of RAM for video would be fine, but the main RAM should be at least 2GB.
 
Here is an interesting read I found days ago regarding the PS4.

http://www.edepot.com/playstation4.html

In reality, how far is this from the truth? It says the PS4 might use Holographic Versatile Disc (HVD), which has a 6TB disc capacity, quad full HD, 1024MB of DDR4 main system memory, and 1024MB of GDDR5 memory located inside the RSX2 chip using four 256MB Samsung chips. I doubt Sony will move away from their Blu-ray discs, but is there a chance they might use HVD?

That is almost certainly super old.

Here's my favorite part:

Evolution of poker on PlayStation

A category of games on the PlayStation systems that really have evolved is the poker games. From the Card Shark classic game for PS1 up until the great High Stakes on the Vegas Strip: poker edition title for the PS3. poker games have proven their worth on consoles such as the PlayStation and we just look forward to see what's next on the PlayStation 4.
 

missile

Member
There's no way that any choice of CPU is going to reduce costs or shorten project timelines in any case. Dream on, game dev doesn't work like that. The biggest culprits for projects overrunning are always management flip-flopping on direction and the like.
Perhaps he, Mr. tinfoilhatman, should have a look at the book Masters of Doom. It's striking.

So Cell had nothing to do with increased development costs and decreased performance for many PS3 multi-platform games? ...
There are some initial costs for adapting to the architecture, but these costs are rather low compared to the cost of content development. And I can't see how Cell's architecture has brought any shortage of games; on the contrary, it brought many high-quality games.

... Weird, from what I understand it's far easier/faster to develop for, say, a multi-core x86 CPU with OOO processing (or even the Xbox's IBM multi-core) than for anything exotic like Cell, where parallel processing is not an option but a requirement.

So, setting performance aside, programming for processors that have to process data in parallel (and in order) is just as easy as conventional development? I'm not a developer (I work closely with them and QA teams), but that goes against everything I've ever heard.
There was a problem at the start of this gen when coders were largely indoctrinated into the Wintel thread-based programming model rather than the "job"-based approach that is vastly more efficient in a multi-core environment. Parallelization is the future, not super-fast single cores, so this is/was a necessary transition.

Also, you need to remember that on a coding team there are people working at a whole range of levels in relation to the hardware. Engine guys will be looking at low-level optimization, but more will just be pumping out largely platform-agnostic mid-level C, and some will be using higher-level scripting solutions like Lua or some bespoke equivalent for maximum portability.

Point is, the bulk of the code-base is going to be serving the needs of the design/content rather than the hardware it's running on.

What affects things most on a day-to-day basis are the quality of tools and debug systems, because errors in implementation are inevitable regardless of hardware. The more complex the hardware, the more important this aspect becomes, but the key thing to remember is that once these issues are solved, they are solved. Once you've got the tools and a staff experienced at using them, you shouldn't need to be constantly revisiting this stuff, even on systems which are really picky about memory/data alignment like PS3.

Code is maths; it's a pyramid of knowledge which is constantly built up layer upon layer. So "difficulties" tend to be of decreasing significance the longer the platform lasts in any case, which is why shifting CPU families is kind of a double-edged sword. You still have the initial "bump" where existing staff need to retrain/acquaint themselves with the new tools, tech, and their peculiarities.

More than anything else though, managers want results, not excuses. Coders are paid problem-solvers, so whinging about the difficulty of a particular piece of hardware really doesn't cut it. If coder X is struggling, you simply replace him with someone who can handle it, and ideally relish the challenge.

QFT

Dumb question, but with the 360 was parallel processing ever used (with the CPU)? I was under the impression it wasn't, and that this was something unique to GPUs and processors like Cell. ...
In terms of parallel processing, everything that applies to Cell also applies to the 360's Xenon processor. However, the programming model is different. The Xenon processor, consisting of three modified PowerPC cores, is based on the threading model, working out of a uniform memory system where each memory access is served by a fixed memory controller. Such systems have served many applications and have increased processor efficiency over the last decade. However, this threading model has its limits in terms of scalability: it becomes inefficient as the number of threads increases.

Managing all those threads needs more fine-grained control. To get that, threads are broken up into jobs. The difference is that the communication is also broken up, i.e. the communication (the memory transfer) is made explicit and user-driven, and this allows for greater utilization of multicore processors. In a threading model all the communication happens implicitly, done automatically by the processor's control logic, which guesses how best to schedule the threads and utilize its buses. But this logic has its limits: usually, if you have more threads than processors in a system, bus contention and over-utilization of certain units arise and lead to a degradation in performance. This has been known for ages. As the workload increases, a threading model becomes less and less efficient at utilizing system resources, and since the workload for games increases at a huge rate, the programming model has to change as well.

What makes the adaptation so difficult is that the dominant x86 architecture dictates the threading model. Intel will tell you how well their processors cope with your threads, especially IF YOU BUY THEIR COMPILERS! That's where their knowledge is. Their entire compiler suite is optimized for the threading model; there are hundreds of patents behind it. They sell books and throw threading libraries at you like nothing else, so there's no reason for them to change the x86 architecture; they just cash in. By the way, there is a thread library on Cell that abstracts the SPEs into pthreads (see IBM's libspe2 on PS3 Linux), so you can bombard the SPE cores with threads as well. But you will soon find that it's better to use the communication unit (the MFC, memory flow controller) within each SPE to boost your game's performance manyfold, as is done with Sony's SPURS library, a job-scheduling library for the SPEs. Quite a few PS3 games use it.

But don't get me wrong on the thread model. It has its uses, especially in the casual space, and I wouldn't reach for a job model or explicit DMA programming if I just wanted to put a small but cool game together. However, there was a reason Cell was created, and there is also a reason why it differs from x86.
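To show what "explicit, user-driven communication" means in practice, here's a rough, purely illustrative C++ sketch (memcpy stands in for the MFC's asynchronous DMA, the buffer sizes are invented, and none of this is Sony's actual SPURS API): the job pulls a slice of input into a small local buffer, computes on it while the next slice is nominally being fetched, then pushes the results back out.

Code:
#include <algorithm>
#include <cstddef>
#include <cstring>

// Illustrative stand-ins for explicit data movement: on an SPE these would be
// asynchronous MFC DMA commands, not blocking memcpy calls.
constexpr std::size_t CHUNK = 4096;  // elements per slice (invented number)

static void dma_get(float* local, const float* main_mem, std::size_t n) {
    std::memcpy(local, main_mem, n * sizeof(float));   // "DMA in" from main memory
}
static void dma_put(float* main_mem, const float* local, std::size_t n) {
    std::memcpy(main_mem, local, n * sizeof(float));   // "DMA out" back to main memory
}

// Process a big array in CHUNK-sized slices with two local buffers:
// while one slice is being computed on, the next slice's transfer is in flight.
void job_scale(float* data, std::size_t n, float k) {
    static float local[2][CHUNK];                       // "local store" double buffer
    std::size_t done = 0;
    std::size_t in_flight = std::min(n, CHUNK);
    int cur = 0;
    dma_get(local[cur], data, in_flight);               // prefetch the first slice

    while (in_flight > 0) {
        std::size_t next_off = done + in_flight;
        std::size_t next_len = (next_off < n) ? std::min(n - next_off, CHUNK) : 0;
        if (next_len > 0)                               // kick off the next transfer
            dma_get(local[cur ^ 1], data + next_off, next_len);

        for (std::size_t i = 0; i < in_flight; ++i)     // compute on the current slice
            local[cur][i] *= k;

        dma_put(data + done, local[cur], in_flight);    // write results back
        done += in_flight;
        in_flight = next_len;
        cur ^= 1;
    }
}

On real hardware the two dma_get calls would be asynchronous MFC commands you later wait on by tag, which is what lets the next slice's transfer overlap the math on the current one; the synchronous memcpy here only shows the shape of the code.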

I can't wait for x86 to die off. These 3D transistors (and wafers) are only a temporary slowdown of Moore's law for this architecture type. As many bottlenecks as the Cell had, it is far more impressive than any x86 I've seen (not saying it's the most powerful... just for what it is). If people invested more into this, maybe even a Cell with more than a measly 16kb per SPE and better memory controllers.. etc... we'd get some ridiculous breakthroughs. By far the biggest bottleneck was memory. Not being able to access RAM fast enough, that's why XDR was needed.
It won't die any time soon, if ever. Some of the graphics guys would need to come down and make a new processor emulating x86 for compatibility. Anyhow, AMD missed the chance to transform the architecture with its AMD64/x86_64 ISA.

Anyhow, I hope the PS4 pwns.
 
Perhaps he, Mr. tinfoilhatman, should have a look at the book Masters of Doom. It's striking.

There are some initial costs for adapting to the architecture, but these costs are rather low compared to the cost of content development. And I can't see how Cell's architecture has brought any shortage of games; on the contrary, it brought many high-quality games.




QFT


In terms of parallel processing, everything that applies to Cell also applies to the 360's Xenon processor. However, the programming model is different. The Xenon processor, consisting of three modified PowerPC cores, is based on the threading model, working out of a uniform memory system where each memory access is served by a fixed memory controller. Such systems have served many applications and have increased processor efficiency over the last decade. However, this threading model has its limits in terms of scalability: it becomes inefficient as the number of threads increases.

Managing all those threads needs more fine-grained control. To get that, threads are broken up into jobs. The difference is that the communication is also broken up, i.e. the communication (the memory transfer) is made explicit and user-driven, and this allows for greater utilization of multicore processors. In a threading model all the communication happens implicitly, done automatically by the processor's control logic, which guesses how best to schedule the threads and utilize its buses. But this logic has its limits: usually, if you have more threads than processors in a system, bus contention and over-utilization of certain units arise and lead to a degradation in performance. This has been known for ages. As the workload increases, a threading model becomes less and less efficient at utilizing system resources, and since the workload for games increases at a huge rate, the programming model has to change as well.

What makes the adaptation so difficult is that the dominant x86 architecture dictates the threading model. Intel will tell you how well their processors cope with your threads, especially IF YOU BUY THEIR COMPILERS! That's where their knowledge is. Their entire compiler suite is optimized for the threading model; there are hundreds of patents behind it. They sell books and throw threading libraries at you like nothing else, so there's no reason for them to change the x86 architecture; they just cash in. By the way, there is a thread library on Cell that abstracts the SPEs into pthreads (see IBM's libspe2 on PS3 Linux), so you can bombard the SPE cores with threads as well. But you will soon find that it's better to use the communication unit (the MFC, memory flow controller) within each SPE to boost your game's performance manyfold, as is done with Sony's SPURS library, a job-scheduling library for the SPEs. Quite a few PS3 games use it.

But don't get me wrong on the thread model. It has its uses, especially in the casual space, and I wouldn't reach for a job model or explicit DMA programming if I just wanted to put a small but cool game together. However, there was a reason Cell was created, and there is also a reason why it differs from x86.


It won't die any time soon, if ever. Some of the graphics guys would need to come down and make a new processor emulating x86 for compatibility. Anyhow, AMD missed the chance to transform the architecture with its AMD64/x86_64 ISA.

Anyhow, I hope the PS4 pwns.

Why do you type like that?
 

tinfoilhatman

all of my posts are my avatar
Perhaps he, Mr. tinfoilhatman, should have a look at the book Masters of Doom. It's striking.

There are some initial costs for adapting to the architecture, but these costs are rather low compared to the cost of content development. And I can't see how Cell's architecture has brought any shortage of games; on the contrary, it brought many high-quality games.

Cool, thank you for taking the time to elaborate/explain, very fascinating. Hopefully we start moving in that direction in the near future. I wonder what mainstream product or technology could spur a BIG push for parallel/DMA-based processing versus threading.
 

mrklaw

MrArseFace
1GB DDR4 and 1GB GDDR5 sounds odd. That's not unified memory, when indications were that Sony was going unified this time. If they have a 'system' pool of RAM, wouldn't it be much easier to increase the amount compared to when they were looking at all GDDR5? 4+1 would be nice.
 

Grim1ock

Banned
The easiest thing for Sony to do would be to get all the programmers from Polyphony Digital, Naughty Dog, and Guerrilla Games, fly them to Tokyo, and let them decide what they would want from a future PS4 CPU, GPU, and of course the type and size of memory.

These three developers should call the shots, not unreliable third-party devs, AMD, or other companies with their own vested interests.
 

onQ123

Member

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
I'm not sure about PS1, but looking at all the PlayStation consoles and handhelds, Sony seems to have a thing for fast RAM:

they put 4MB of eDRAM in the PS2 GPU, used 2MB in the PSP GPU,

placed 256MB of GDDR3 on the PS3 GPU and used XDR for the main memory,

and the Vita has 128MB of wide I/O VRAM and 512MB of RAM stacked on the GPU/CPU in the SoC.

So speed seems to be a big deal to them when it comes to RAM.

So I'm expecting the PS4 to have its memory stacked on the GPU/CPU, however they have it set up.

Wow great points.
 