
PS4 Rumors, APU code named 'Liverpool', Radeon HD 7970 GPU, Steamroller CPU, 16GB Flash


Sid

Member
Looking at the various rumors, I think the PS4 will have 2GB GDDR5 only, but in your view is 2GB GDDR5 + 2GB DDR3 feasible in terms of cost and size of the motherboard?
 
Have you guys seen this on Kotaku?

[Image: original.jpg]


Looks like a hybrid between the original Phat and the Slim. I don't like it.
 

RaijinFY

Member
The easiest thing for Sony to do would be to get all the programmers from Polyphony Digital, Naughty Dog and Guerrilla Games, fly them to Tokyo, and let them decide what they would want from a future PS4 CPU, GPU and of course the type and size of memory.

These 3 developers should call the shots. Not unreliable third-party devs, AMD or other companies with their own vested interests.

I think it would be easy to guess what they would want. Powerful GPU, capable CPU, lots of RAM (4GB), good amount of bandwidth.
 
That you quoted it all, yes.
That he formats his posts to fit the reply box, no.

I format my posts to fit the reply box sometimes; it makes them somewhat easier to read.

If it's so easy to read then my quoting shouldn't be a problem.

Anyway, on topic, I suspect that the PS4 (and NextBox) will be somewhere in between the "it'll be super cheap" camp and the "powerhouse for the ages" camp. Reality is usually pretty boring.
 

MadOdorMachine

No additional functions
So where does the single HDMI output leave the HDMI pass-thru and other Jeff Rigby speculation? I don't remember everything he was saying, but I thought there was some speculation about cloud gaming and such. Is this remodel a showstopper?
 
So where does the single HDMI output leave the HDMI pass-thru and other Jeff Rigby speculation? I don't remember everything he was saying, but I thought there was some speculation about cloud gaming and such. Is this remodel a showstopper?
If the pictures are accurate then yes, no HDMI pass-thru. The other speculation, maybe, but less likely. IF Microsoft does release an Xbox 361, then for casual use the PS3 can't compete, except on free Internet access, if that difference lasts.

The Digitimes rumor has to be 100% false, as there is no Kinect-like port or HDMI pass-thru in the pictures. If the pictures are real, this impacts my speculation. If I'm totally wrong, then there's no BC for the PS4, the PS3 is dead within 2 years, or another refresh is coming with low-power modes, which doesn't make sense to me.
 

Globox_82

Banned
If the pictures are accurate then yes, no HDMI pass-thru. The other speculation, maybe, but less likely. IF Microsoft does release an Xbox 361, then for casual use the PS3 can't compete, except on free Internet access, if that difference lasts.

The Digitimes rumor has to be 100% false, as there is no Kinect-like port or HDMI pass-thru in the pictures. If the pictures are real, this impacts my speculation. If I'm totally wrong, then there's no BC for the PS4, the PS3 is dead within 2 years, or another refresh is coming with low-power modes, which doesn't make sense to me.

That's why you shouldn't even try to make sense out of all this in the first place.
 

mrklaw

MrArseFace
If the pictures are accurate then yes, no HDMI pass-thru. The other speculation, maybe, but less likely. IF Microsoft does release an Xbox 361, then for casual use the PS3 can't compete, except on free Internet access, if that difference lasts.

The Digitimes rumor has to be 100% false, as there is no Kinect-like port or HDMI pass-thru in the pictures. If the pictures are real, this impacts my speculation. If I'm totally wrong, then there's no BC for the PS4, the PS3 is dead within 2 years, or another refresh is coming with low-power modes, which doesn't make sense to me.

For casual use, as you put it, wouldn't free access to streaming services like Netflix be a fundamental win for Sony compared to Xbox putting them behind a paywall? Tech is nice for enthusiasts, but the mainstream wants simple and cheap right now.
 
For casual use, as you put it, wouldn't free access to streaming services like Netflix be a fundamental win for Sony compared to Xbox putting them behind a paywall? Tech is nice for enthusiasts, but the mainstream wants simple and cheap right now.
Agreed, but simple and cheap is a Roku or Android box that sells for $49.00 and gives you free Internet access too.
 

TheD

The Detective
I can't wait for x86 to die off. These 3D transistors (and wafers) are only a temporary slowdown of Moore's law for this architecture type. As many bottlenecks as the Cell had, it is far more impressive than any x86 I've seen (not saying it's the most powerful... just for what it is). If people invested more into this, maybe even a Cell with more than a measly 16KB per SPE and better memory controllers, etc., we'd get some ridiculous breakthroughs. By far the biggest bottleneck was memory: not being able to access RAM fast enough, which is why XDR was needed.

Maybe I'm just spewing crap, but as everyone is pointing out, parallel processing is the future, and the Cell was one of the first CPUs that was not only widely used but also highly parallel.


You want the second-fastest processor ISA to die off for a dead-end CPU (Cell)?!
Nothing stops x86 from having a lot of small, slow cores; it is just not a very good idea for most performance cases.

Cell at this point is really old tech; Sandy (and Ivy) Bridge are much more impressive.

Also lol, the LS on each SPE is 256KB, not 16!

x86 as an ISA, I think, is going to die; processors made by AMD and Intel without x86 ISA microcode will continue, as they are state-of-the-art out-of-order processors. There would have to be a new ISA accepted, and I'd guess that AMD is trying to make that happen with the HSA Foundation and by releasing their Fusion code as open source.

Fusion will NOT kill off x86!

Fusion CPUs use x86 cores! The idea behind them is to make the GPU an extension of the CPU.

Well, Sandy Bridge is designed a lot differently than the PPE. You can't really compare them on FLOPS.

Thus my point: FLOPS only make up a small part of how fast a processor is.
 

Clear

CliffyB's Cock Holster
jeff_rigby said:
The Digitimes rumor has to be 100% false, as there is no Kinect-like port or HDMI pass-thru in the pictures. If the pictures are real, this impacts my speculation. If I'm totally wrong, then there's no BC for the PS4, the PS3 is dead within 2 years, or another refresh is coming with low-power modes, which doesn't make sense to me.

With respect, Jeff, I think you're overreacting somewhat.

PS3 will not be dead within 2 years. Sure it'll be deprioritized in the primary saturated markets by that point thanks to all the new platforms rolling out in that time, but it'll have a long life ahead of it in the emerging markets (BRIC and others).

Sony will not drop PS3 until it exceeds 100m units shipped, ~80m in primaries (easily attainable within the next 18 months) plus 20m from trickle-down in the years to come.

The race is long, and so long as they can shift PS3 units without loss they will continue to do so, with cost/market-conscious refactors continuing also. The beauty is that PS3 supports its own storefront, so they won't face the same issues of rapidly deteriorating software sales as they had with the PS2.
 

Globox_82

Banned
With respect, Jeff, I think you're overreacting somewhat.

PS3 will not be dead within 2 years. Sure it'll be deprioritized in the primary saturated markets by that point thanks to all the new platforms rolling out in that time, but it'll have a long life ahead of it in the emerging markets (BRIC and others).

Sony will not drop PS3 until it exceeds 100m units shipped, ~80m in primaries (easily attainable within the next 18 months) plus 20m from trickle-down in the years to come.

The race is long, and so long as they can shift PS3 units without loss they will continue to do so, with cost/market-conscious refactors continuing also. The beauty is that PS3 supports its own storefront, so they won't face the same issues of rapidly deteriorating software sales as they had with the PS2.

Well said. PS2 is still selling.
 

Grim1ock

Banned
I'm not sure about PS1, but looking at all the PlayStation consoles & handhelds, Sony seems to have a thing for fast RAM:

they put 4MB of eDRAM in the PS2 GPU, used 2MB in the PSP GPU,

placed 256MB of GDDR3 on the PS3 GPU & used XDR for the main memory,

and the Vita has 128MB of wide I/O VRAM & 512MB of RAM stacked on the GPU/CPU in the SoC.

So speed seems to be a big deal to them when it comes to RAM,

and I'm expecting the PS4 to have its memory stacked on the GPU/CPU, however they have it set up.

The more advanced and complex a machine gets, the more the need for fast, high-bandwidth RAM increases. And that's only scratching the surface when you consider gaming is only one part of a console's tick boxes. Next-generation gaming is going to be highly dependent on the size and speed of RAM, and it's one thing devs cry about this gen.

Sony has always gone with their own interpretation of fast RAM for their consoles, and the PlayStation 4 will be no different. Take the Cell for example: all that clock speed is meaningless if the data can't be accessed quickly enough, which is why Sony decided to go with XDR RAM, as it was the fastest available memory at the time. They will go for fast, high-bandwidth next-generation RAM for their console and will either partner with Rambus again or go with Micron.


I think it would be easy to guess what they would want. Powerful GPU, capable CPU, lots of RAM (4GB), good amount of bandwidth.

That's a given. But nothing beats having the feedback of dozens and dozens of talented programmers at your disposal. Sony's teams have been parallel coding since 2006, and right now they have loads of software and coding libraries.

What they have learned so far, what they want in the future, and what improvements they would like to see in the PS4 would be highly advantageous from Sony's hardware engineers' point of view. Think of it like a Formula One car, where the designers rely upon the drivers for feedback on where to improve the car.

In other words, their input would be to the point rather than vague and non-specific like third parties'.
 
Nothing wrong with trying to make sense of things. I've just never seen anything that supports the notion of a PS3.5.
Thanks; there appears to be more than one 4000 series chassis.

http://www.theverge.com/2012/7/5/3138430/playstation-3-cech-4001x-sony-fcc said:
A pair of debugging stations are also present in the filing, DECH-4001x and DECH-S4001x, suggesting that Sony is putting at least some new hardware through its paces — debugging stations are traditionally used when there's been a change to the chips inside. This could indeed be an all-new slimmer model, but is just as likely to be an internal hardware change as Sony continues to cut the console's cost.
 
Thus my point: FLOPS only make up a small part of how fast a processor is.

Well, that's not true. FLOPS are the be-all and end-all metric as to how a part will perform: performance will, theoretically and practically, never go past that number. What you should note about FLOPS is that by themselves they can be misleading with regard to the application you are pursuing.

In regards to x86, I think ARM has a more promising future than it does at the moment.
 

missile

Member
Cool, thank you for taking the time to elaborate/explain, very fascinating; hopefully we start moving in that direction in the near future, ...
We already did.

The larger the problem (i.e. games) becomes, the more developers need to look at the data instead of focusing on the code. Data is law. Code isn't. The reason this isn't so obvious yet is that the scale of most games doesn't seem that big. But in reality the size increases at a rapid rate; see last vs. this gen. And as the problem size increases, the computational resources are eaten up in non-linear proportion (see the O-notation and the runtime memory requirements of many algorithms).
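To make "look at the data" concrete, here is a rough C sketch (the particle layout, field names and counts are made up purely for illustration, not taken from any real engine): the same trivial update written against an array-of-structures layout and against a structure-of-arrays layout. The second keeps exactly the fields the loop touches packed together, which is what caches, SIMD units and DMA engines want.

Code:
/* Illustrative only: array-of-structures vs structure-of-arrays for the
   same update. All names and sizes are invented for the example. */
#include <stddef.h>

#define NPART 100000

/* Array of structures: each update drags a full 64-byte particle through
   the cache even though only 8 bytes (pos_y, vel_y) are read and written. */
struct particle_aos {
    float pos_x, pos_y, pos_z;
    float vel_x, vel_y, vel_z;
    float mass, radius;
    char  other_state[32];          /* AI flags, colour, etc. (assumed) */
};

void update_aos(struct particle_aos *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i) {
        p[i].vel_y -= 9.81f * dt;          /* gravity */
        p[i].pos_y += p[i].vel_y * dt;
    }
}

/* Structure of arrays: the loop streams two tightly packed float arrays. */
struct particles_soa {
    float pos_y[NPART];
    float vel_y[NPART];
    /* the remaining fields would live in their own arrays */
};

void update_soa(struct particles_soa *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i) {
        p->vel_y[i] -= 9.81f * dt;
        p->pos_y[i] += p->vel_y[i] * dt;
    }
}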

Let me give an easy example: an n×n two-dimensional gaming world scales in memory at a rate of n². Now imagine an algorithm which goes over each element of the gaming world to update the state (a variable, for example) of each element. Ok. Now the same problem in three dimensions: we have n×n×n = n³ memory requirements and an algorithm that updates each element as well. Hence, we have to bring in n³ worth of data just to update one or perhaps two values per element. The processor will idle most of the time, waiting until the data is within the registers. For that reason we have to compute more stuff per element so that the memory transfer is worth anything. And the problem grows as n increases, since the exponent takes effect much more strongly, i.e. (fictional numbers) 64² is nothing, 128² is nothing as well, 512² is solvable, and 1024² might be kind of a challenge. But contrast this situation with 64³, 512³, and 1024³. The jump from 512³ to 1024³ needs eight times the resources just by doubling n. So if you need 4GB for the problem at 512³, you will need 32GB for the problem at 1024³, and we still want to have everything presented at 60fps on the screen.
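As a back-of-the-envelope check of those numbers, here is a throwaway C snippet. The 32 bytes of state per world element is an assumed figure (it happens to reproduce the 4GB-at-512³ / 32GB-at-1024³ numbers above), not a value from any real engine; the last column shows the bandwidth needed just to touch every element once per frame at 60fps.

Code:
/* Back-of-the-envelope check of the n^2 vs n^3 scaling above.
   bytes_per_element is assumed, purely for illustration. */
#include <stdio.h>

int main(void)
{
    const double bytes_per_element = 32.0;
    const int    sizes[] = { 64, 128, 512, 1024 };

    for (int i = 0; i < 4; ++i) {
        double n   = sizes[i];
        double mb2 = n * n * bytes_per_element / (1024.0 * 1024.0);              /* n^2 world */
        double gb3 = n * n * n * bytes_per_element / (1024.0 * 1024.0 * 1024.0); /* n^3 world */
        printf("n=%4.0f   2D: %9.1f MB   3D: %8.2f GB   touch all of 3D @60fps: %8.1f GB/s\n",
               n, mb2, gb3, gb3 * 60.0);
    }
    return 0;
}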

This puts a heavy burden on the entire system, esp. on the memory subsystem. The computational performance of any fast processor means nothing if the data can't be brought in fast enough. And increasing the bandwidth and reducing memory latency is, well, the core issue of it all. Bandwidth needs pins, wires, high clock rates and so on. Wires need well-designed circuitry, wires need to be short, etc. Worst of all, wires produce massive amounts of heat if data is transported over them. From a physical standpoint, transporting data over a distance is a much bigger problem than flipping a transistor at terahertz speed. And to reduce latency a memory chip must, first, be small (to cut down distance, since the electrons need time to travel through the wires), second, be close to the processor, and third, have no control logic (unlike cache memories). That, for example, is the reason each SPE within the Cell processor has 'only' 256KB of (software-managed) local store. A 512KB memory would have increased the cycle count for retrieving the data considerably, making the SPU starve for data.

Another, physical, issue is that the space around a computational unit is quite small. So one cannot fit an arbitrary amount of memory very close to a computational unit. Hence, in general, and that's an insight into parallel efficiency right there, a computational unit is broken up into parts such that more memory can be located in the vicinity of each part. By breaking up the computational unit and memory, these computational parts, being units again, can be fed faster - if done right. The drawback, however, is that one now has to route the data through the system more tightly. Some systems do so by telling you that everything is just fine. The word here is memory mapping. The system maps the broken-up memory (different address spaces) into a linear memory map, just as if there were only one big chunk of memory. What gives? An ease in programming. One just dereferences a memory address and the system brings in the data - no matter how long it takes, whatever the current situation is on the bus, and no matter if it's OK to trash your cache midway. That's all fine if system resources are not at a premium and if you just want to surf the web, fill a spreadsheet, or play small games.
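A minimal sketch of what explicitly routing the data yourself looks like: double-buffered DMA, where the transfer of the next chunk overlaps with the computation on the current one. dma_get()/dma_wait(), CHUNK and process_chunk() are hypothetical stand-ins for whatever the platform actually provides (on a Cell SPU it would be the MFC calls), not a real API.

Code:
/* Double-buffered streaming, sketched with hypothetical dma_get()/dma_wait()
   placeholders. CHUNK and process_chunk() are invented for the example. */
enum { CHUNK = 16 * 1024 };                 /* bytes per transfer, small enough for a local store */

extern void dma_get(void *local, unsigned long long remote, unsigned size, int tag);
extern void dma_wait(int tag);
extern void process_chunk(float *data, unsigned count);

static float buf[2][CHUNK / sizeof(float)]; /* compute on one buffer while the other is filling */

void stream_process(unsigned long long src, unsigned nchunks)
{
    int cur = 0;
    dma_get(buf[cur], src, CHUNK, cur);     /* prime the pipeline */

    for (unsigned i = 0; i < nchunks; ++i) {
        int next = cur ^ 1;
        if (i + 1 < nchunks)                /* start fetching the next chunk early */
            dma_get(buf[next], src + (unsigned long long)(i + 1) * CHUNK, CHUNK, next);

        dma_wait(cur);                      /* make sure the current chunk has arrived */
        process_chunk(buf[cur], CHUNK / sizeof(float));  /* compute overlaps the next transfer */
        cur = next;
    }
}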

But as the problem size of many things, not only games, increases, care must be given to how your data rushes through the system. So depending on where we want to go with games, we might have to go back and give the data (layout, transport) a new look. Back, because this is all old news. Back in the day, microprocessors built for scientific computing (vector processors) always had some units for controlling how the data runs through the system. Why? Perhaps because scientific problems are huge in size? It was for a good reason that the Cell processor was used in building IBM Roadrunner, the first computer in the world (and the fastest at the time) able to break the 1 PetaFLOP barrier in 2008. This was a more than ten-year research effort. Many were in the battle. It's not only about having a fast processor; it's the entire system that has to make the difference, as Seymour Cray once said: "Anyone can build a fast CPU. The trick is to build a fast system."

Anyhow. The current state of affairs is that many developers say: don't hurt me. They just don't want to dig into it, and later blame the hardware for being too difficult to program, too slow, whatsoever... as the reason why their game sucks in the first place, i.e. why they can't cash in the easy way. Most of these developers are just using tools to compose games together. But if their tools can't solve a given problem... they're stuck.

I wonder what mainstream product or technology could spur a BIG push for parallel/DMA-based processing versus threading.
Games.

What's quite interesting is that some developers in the industry think that we have somehow reached a limit in graphics, physics, etc. just because they have reached a limit. They proclaim it is just not worth it anymore to do something, because everything is there already. Graphics done, physics done,.... Here is the engine, just pay us! Well, that's an effective way of keeping the competition down and staying in business, i.e. keeping you out of the game.

However, I can tell from a mathematical perspective that we have just started. Many of the cool things, esp. when it comes to real physics, are hidden in the realm of partial differential equations (PDEs). Just imagine a fully destructible environment based on real physics where everything can be destroyed by applying the right force to it, where each material brick has a volume (and a material property, perhaps changing over time) and can break apart, melt, evaporate, etc. depending on the force(s) applied. Well, don't get me wrong. I mean real destructibility. That's something different from what you see in current games. Or what about real 3D explosions with a real pressure wave that has an impact on material structures? What about fluids that behave like fluids (and not like Nvidia's Kepler fluids), fluids that have a real impact, fluids that are not restricted to some fixed-size container? And all this coming with a high resolution in space, time, and screen resolution... everything in realtime. So we are at the end? Quite the contrary is true.

Once we can solve PDEs in realtime (to a suitable degree), our gaming worlds will look different from what we see today. In essence, our current gaming worlds will look static against what will be possible. PDEs are not only about representing worlds more realistically; they can also be guided and used to change the rules of the gaming world, i.e. to build non-physical worlds that aren't possible in the real world, by applying/adding artificial forces, tension, etc. Such things can be used as a design tool to build fantasy worlds you haven't dreamed of. For example, you can build an entire world out of water. Every form within this world is kept in shape by an artificial, design-driven force. With the push of a button you can make this world splash apart in an instant by removing the forces that are holding these forms together. The possibilities are virtually infinite.

Solving those PDEs in the manner described requires everything from a system!
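For a feel of what "solving a PDE on a grid" means in practice, here is a tiny, unoptimised C sketch of one explicit finite-difference step of the 2D heat equation u_t = alpha*(u_xx + u_yy). The grid size, coefficient and time step are illustrative values only; note that every cell is touched every step, which is exactly the memory-bound situation described above.

Code:
/* One explicit finite-difference step of the 2D heat equation.
   N, ALPHA and DT are assumed values for illustration; DT is chosen
   well inside the explicit-scheme stability limit. */
#include <string.h>

#define N     256          /* grid resolution (assumed) */
#define ALPHA 0.1f         /* diffusion coefficient (assumed) */
#define DT    0.1f         /* time step (assumed) */

static float u[N][N], u_next[N][N];

void heat_step(void)
{
    for (int y = 1; y < N - 1; ++y)
        for (int x = 1; x < N - 1; ++x) {
            float lap = u[y][x - 1] + u[y][x + 1] + u[y - 1][x] + u[y + 1][x]
                      - 4.0f * u[y][x];                 /* discrete Laplacian, grid spacing = 1 */
            u_next[y][x] = u[y][x] + ALPHA * DT * lap;  /* explicit Euler update */
        }
    memcpy(u, u_next, sizeof(u));                       /* every cell is read and written each step */
}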


Well, I have to get back programming my Cosmac CHIP-8 game further. . . xD



PS: For some reason I'm bounded by 80 characters per line.
 
missile said:
We already did. [full post on data layout, memory, and PDEs quoted above]

I could follow most of this, but man do you go low. My inferior high-level brain can't take much more ;)

I will say though, in the overall CS community there seems to be a lost focus on data structures and dataflow management. I'm still in the student stage (almost done though!!), but overall there is more of a focus on making managed languages like C# the standard, where a lot of work is done for you, instead of teaching all of the solid fundamentals we should learn. The walls we are running into now are very much like you say: less and less concern is being placed on data in favor of a more uniform, code-based approach. I'm probably in no place to criticize, but from the outside looking in I don't think developers had a strong grasp of their code and where it could be broken down into a model that isn't the Wintel thread-centric norm. I think this is what stalled a lot of advancement and choked performance.
 

Sid

Member
If Sony integrates the Cell on the PS4 motherboard (like the Emotion Engine with the PS3), will PS3 emulation be possible?
 

onQ123

Member
If they won't, can we still play PSN games on the new system?

Hopefully Sony was smart enough to set up a development environment for PSN games so the code can be run on different hardware, since PSN games don't push the hardware too much.

It would make a lot of sense to do so, considering the vast PS3 library and the well fleshed-out PlayStation Store; even the Vita has PSP hardware for partial hardware emulation.

I don't think there is any PSP hardware inside the PS Vita; it's full software emulation as far as I know.
 

Donnie

Member
Well, that's not true. FLOPS are the be-all and end-all metric as to how a part will perform: performance will, theoretically and practically, never go past that number. What you should note about FLOPS is that by themselves they can be misleading with regard to the application you are pursuing.

In regards to x86, I think ARM has a more promising future than it does at the moment.

FLOPS tell you how many floating-point operations a chip can theoretically perform in a second. That is significant, but definitely not the be-all and end-all metric of how a part will perform, not even close; they're a good indication though.
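For what it's worth, the peak-FLOPS number itself is just multiplication: cores × SIMD lanes × floating-point ops per lane per cycle × clock. A quick sketch with made-up figures (not the specs of any rumoured console part):

Code:
/* Where a "peak FLOPS" figure comes from. All values below are assumed,
   purely to show the arithmetic. */
#include <stdio.h>

int main(void)
{
    const double cores          = 4.0;    /* assumed CPU core count            */
    const double simd_lanes     = 4.0;    /* e.g. 128-bit SIMD on 32-bit floats */
    const double flops_per_lane = 2.0;    /* fused multiply-add per cycle       */
    const double clock_hz       = 3.2e9;  /* 3.2 GHz, purely illustrative       */

    double peak = cores * simd_lanes * flops_per_lane * clock_hz;
    printf("theoretical peak: %.1f GFLOPS\n", peak / 1e9);   /* ~102.4 GFLOPS */
    return 0;
}

Whether real code ever gets anywhere near that number is a different question entirely.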
 

CorrisD

badchoiceboobies
Hopefully Sony was smart enough to set up a development environment for PSN games so the code can be run on different hardware, since PSN games don't push the hardware too much.

They might not be as advanced, but PSN games still use the same aspects of the hardware that full retail games do.
Wipeout HD was a PSN game first and foremost but still took advantage of the hardware like many other titles. Unless they have managed to work out how to emulate the hardware, I see them having to drop BC for everything but, I imagine, PS1/PS2 games, unless they are willing to stick a Cell chip in there for a first run.

But here's hoping they are more competent with the PS4 than they were with the PS3 in regards to BC.
 

onQ123

Member
They might not be as advanced, but PSN games still use the same aspects of the hardware that full retail games do.
Wipeout HD was a PSN game first and foremost but still took advantage of the hardware like many other titles. Unless they have managed to work out how to emulate the hardware, I see them having to drop BC for everything but, I imagine, PS1/PS2 games, unless they are willing to stick a Cell chip in there for a first run.

But here's hoping they are more competent with the PS4 than they were with the PS3 in regards to BC.

I think they have learned enough about emulation in the last 6 years to be able to pull something off, maybe a virtual Cell & a virtual RSX.


No it doesn't. No one has found any trace of PSP hardware in the Vita.

Yep & this is what gives me a little hope that they have made a lot of advancements in emulation.

But still, I think Cell is going to be hard to pull off because of the SPEs, but who knows what they have planned.
 
I think they have learned enough about emulation in the last 6 years to be able to pull something off, maybe a virtual Cell & a virtual RSX.


Yep & this is what gives me a little hope that they have made a lot of advancements in emulation.

But still, I think Cell is going to be hard to pull off because of the SPEs, but who knows what they have planned.

I still really hope (as crazy as it may seem) that Sony puts in those modules that Jeff talks about constantly. Or at least beefed-up ones or something, I don't know. I just want to play my damn PS3 games on a PS4 without any problems.
 
That's probably why they just bought Gaikai. So they can offer cloud gaming for PS1/PS2/PS3/PSN games.

http://www.usatoday.com/tech/gaming/story/2012-07-02/sony-gaikai-streaming-games/55989860/1

I don't see them emulating console games like that. Demos, sure. PS Home, sure (they can easily recode that), but having clusters of Cells or PS3s... or whatever just doesn't seem like a sound investment. Since games for the PS3 are coded extremely low-level on the Cell, they can't just run 3 PS3 games on each blade and have it automatically take up the resources. Hell, I can take Crysis and run 3 instances of it on my computer; you wouldn't be able to get the software to scale to the hardware like that. They just can't do that considering they'd need a shit ton of them to accommodate everyone. Might as well include the hardware in the device and use the extra power for more features.
 
Yes, I am still hopeful that the rumored audio processor and media encoding hardware are in fact provided by 6-8 SPEs integrated onto the APU.
 

patsu

Member
Yes, I am still hopeful that the rumored audio processor and media encoding hardware are in fact provided by 6-8 SPEs integrated onto the APU.

What is this rumor? Do you have a link?

EDIT:
Also, is this Slim PS3 16GB rumor for real?

Is there really a stripped-down 16GB PS3, or is it just a typo or a bad rumor? If so, can we hook it up with a regular PS3 so that we can have a parallel/dual PS3 system? I can think of a ton of uses for such a system.
 
What is this rumor? Do you have a link?

EDIT:
Also, is this PS3 16GB rumor for real?

Is there really a stripped-down 16GB PS3, or is it just a typo or a bad rumor? If so, can we hook it up with a regular PS3 so that we can have a parallel/dual PS3 system? I can think of a ton of uses for such a system.

I never read such a rumor, but I think he means he's hoping for some "Cell"-type thing in the PS4 to do that sort of thing.
 
Forgot about that just that quick. I would say that unless they dropped it, it's still going to be there. The question obviously will be how they accomplish it. I'd be surprised if they kept SPEs though.
AMD has a 4-way crossbar switch for CPU packages in their Fusion HSA APU. In CPU packages, for instance, 4 Jaguar CPU cores can be in one package, or 1 Bulldozer and FPUs, or combinations of these. It looks like multiples of 4 CPU packages with at least 1 FPU element in the sub-package, with Bulldozer designed to work with the FPU. Developer platforms have 4 Bulldozer (or Steamroller) cores with FPUs using the 4-way crossbar switch. There is a rumor that this is going to change to 2 packages of 4 Jaguar CPUs and no FPUs. This leaves room for 2 more CPU packages on the 4-way switch, which could be 1PPU4SPU CPU packages that can provide BC as well as advanced FPU duties and more.

My understanding is that AMD is not providing FPUs (floating-point co-processors) in the Jaguar CPU packages because a GPGPU can provide those features. This is fine for a laptop or consumer desktop, but not for a game console, where setup latency for a GPU or the critical use of the GPU for graphics creates issues.

Sony either filed and published the 1PPU4SPU patent because they were going to go 100% AMD Fusion PPU & SPUs + GPU and not x86 + GPU, or they plan to use the SPUs in addition to AMD's x86 + GPU Fusion and its libraries; the latter makes sense, as x86 would be better at prefetch and branching for the GPU.

AMD includes dedicated codec hardware which is similar to SPUs. Sony has a large investment in SPU codec and video-processing software, and plans to use the PS3's DVD-to-Blu-ray upscaling code in their Blu-ray players to upscale Blu-ray to 4K. I'd guess that they would want to use this code in a PS4.

Plans for the Xbox 720 have the platform receiving a full 1080p H.264 DASH stream (and we must assume H.265 after Jan 2013), decoding it, assembling the transport stream into a video stream, then re-encoding and serving an IPTV DASH stream to a handheld at the resolution the handheld needs, with DRM intact. All this in real time, in the background, while playing a game. The PS4 is assumed to have the same ability, with 4K video resolution supported. See a use in this for a few SPUs?

The same 1PPU4SPU wafers could be used in a PS3 refresh to a SoC, which is the most economical refresh, and Cell cannot be used in such a SoC or in the PS4 SoC, hence the 1PPU4SPU patent. Either a dumb PS3 Cell refresh, or a complete redesign to a SoC like the Xbox 360 S got, which was not possible using Cell. Why refresh the Xbox 360 S? Ultrawide embedded RAM is now available in small custom packages for inclusion in a SoC? A need for more powerful hardware to support more accurate Kinect OS routines? 4K and H.265 codec support? A TrustZone ARM CPU to be included? Lots of reasons, beyond the fact that low-power modes have to be designed into both the PS3 and Xbox 360 or they can't be sold in California within 2 years.
 
FLOPS tell you how many floating-point operations a chip can theoretically perform in a second. That is significant, but definitely not the be-all and end-all metric of how a part will perform, not even close; they're a good indication though.

This is the same thing I just said?
 
Forgot about that just that quick. I would say that unless they dropped it, it's still going to be there. The question obviously will be how they accomplish it. I'd be surprised if they kept SPEs though.

I forgot that as well. It would be awesome if they did use it though since the SPEs can be used for graphics (and other things) as well.
 

CorrisD

badchoiceboobies
What is this rumor? Do you have a link?

EDIT:
Also, is this Slim PS3 16GB rumor for real?

Is there really a stripped-down 16GB PS3, or is it just a typo or a bad rumor? If so, can we hook it up with a regular PS3 so that we can have a parallel/dual PS3 system? I can think of a ton of uses for such a system.

It isn't a rumour, we are getting a second PS3 "slim", a PS3 Slim Slim or Slimmer if you will, lol.

One of the models on the listing is a 16GB model that at this point we can assume is like the 4GB 360 console: it will have 16GB built in and will allow you to stick your own hard drive in there without having to take the previous one out.
It seems to be the lowest-priced model, there not only to compete with the cheaper 360 models but also with the Wii U, by trying to attract the casual audience with things like Wonderbook.
 
It isn't a rumour, we are getting a second PS3 "slim", a PS3 Slim Slim or Slimmer if you will, lol.

One of the models on the listing is a 16GB model that at this point we can assume is like the 4GB 360 console: it will have 16GB built in and will allow you to stick your own hard drive in there without having to take the previous one out.
It seems to be the lowest-priced model, there not only to compete with the cheaper 360 models but also with the Wii U, by trying to attract the casual audience with things like Wonderbook.

I honestly see a $150 price point. Not everyone does, but there are two other HDD models, and they CERTAINLY shouldn't price their 500GB model at 300 bucks just because it has a bigger HDD. Costs should be going down, not up.
 

CorrisD

badchoiceboobies
I honestly see a $150 price point. Not everyone does, but there are two other HDD models, and they CERTAINLY shouldn't price their 500GB model at 300 bucks just because it has a bigger HDD. Costs should be going down, not up.

It would be great. We were expecting the normal HDD models to come down before we knew there was another redesign; $199 is what I would have assumed before we knew there was a 16GB model arriving too.
Would be great for them to reach a $150 price point, and hopefully make upgrading with an HDD easy for those that want to.
 

Sid

Member
Can anyone tell me how powerful the CPU in this APU package is? Are we talking high-end i5 power here, above that, or less?
 
It would be great. We were expecting the normal HDD models to come down before we knew there was another redesign; $199 is what I would have assumed before we knew there was a 16GB model arriving too.
Would be great for them to reach a $150 price point, and hopefully make upgrading with an HDD easy for those that want to.

I'm thinking 150-200-250.

Though I think they should just do two SKUs, not 3... Sony has a problem with making 5,000 different SKUs. How many Bravias do they have? Lol.
 