
vg247-PS4: new kits shipping now, AMD A10 used as base, final version next summer

KageMaru

Member
Why MS? Why won't the PS4 have BC?

If anything, Sony learned from the PS3 that people might make the leap more easily with BC, combined with a favorable price.

Lack of cell and a switch from Nvidia to AMD would make BC hard for Sony.

Though I'm not convinced Durango will have BC either.

I didn't say that:

Both consoles could be APU + GPU with exactly the same gaming performance, or Xbox3 could have some of the GPU reserved for serving to handhelds.

Can you elaborate on the bolded part?

So this supercomputer thing has always annoyed me, what does it mean?! Cell was a supercomputer, now the next box too, how the fuk is that relevant? Someone explain - if you are qualified; otherwise I don't want to hear you pull stuff out of you know where.

Not sure if you consider me qualified, but all that super computer talk (for both Cell and Durango) is nothing but hyperbolic bull. Completely meaningless in regards to real world performance.
 
D

Deleted member 80556

Unconfirmed Member
The same reason MS screwed up BC for the 360: Nvidia.

Are you telling me Sony is gonna use Nvidia for the GPU? Because if I recall correctly, most, if not all, rumors have pointed at all of the Big Three using AMD as their GPU provider.

Lack of cell and a switch from Nvidia to AMD would make BC hard for Sony.

Though I'm not convinced Durango will have BC either.

Oh, so that's what he meant. Good points; a change of GPU architecture would make BC harder. But if the jump to next-gen is big, could it be possible?
 
Oh, so that's what he meant. Good points; a change of GPU architecture would make BC harder. But if the jump to next-gen is big, could it be possible?
The other issue is that with the PS3, devs could pretty much do bare-metal coding. On the 360, everything was done through an API (DirectX). It's far, far easier to make an API compatible than to work on a case-by-case basis with games that directly interfaced with the GPU.
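To make that concrete, here's a rough sketch of the difference (hypothetical names, not any real console API): a game that only calls an API can be re-pointed at new hardware by re-implementing that API, while a game poking GPU registers directly has to be handled game by game.

[code]
// Hypothetical sketch (my names, not any real console API): why API-level
// BC is easier than bare-metal BC.
#include <cstdint>
#include <vector>

struct Triangle { float verts[9]; };

// A game built against an API only ever sees this interface...
class IGfxDevice {
public:
    virtual ~IGfxDevice() = default;
    virtual void drawTriangles(const std::vector<Triangle>& tris) = 0;
};

// ...so BC is "just" shipping a new implementation for the new GPU.
class NewGpuDevice : public IGfxDevice {
public:
    void drawTriangles(const std::vector<Triangle>& tris) override {
        (void)tris; // translate the old API call into the new GPU's command format
    }
};

// A bare-metal game instead writes straight to hardware registers:
void bareMetalDraw(volatile uint32_t* gpuRegisters) {
    gpuRegisters[0x10] = 0x1;  // magic values tied to one specific GPU;
    gpuRegisters[0x14] = 0x40; // a different GPU has to emulate this
}                              // behaviour per game, not per API
[/code]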
 

Reiko

Banned
The other issue is that with the PS3, devs could pretty much do bare-metal coding. On the 360, everything was done through an API (DirectX). It's far, far easier to make an API compatible than to work on a case-by-case basis with games coded right to the metal (directly interfacing with the GPU).

That's wrong, actually. A lot of devs on the 360 actually coded to the metal.
 

i-Lo

Member
I didn't say that:

Both consoles could be APU + GPU with exactly the same gaming performance, or Xbox3 could have some of the GPU reserved for serving to handhelds.

My apologies. In haste, I missed that part. However, I am still a bit confused. Do you mean that the APU+GPU will yield 2 TFLOPS when the discrete GPU in question is an 8xxxM?

I ask this since the following image shows that the 8800M performs under 1 TF. I would rationally assume that even an 8900M will not topple the 1.5 TF mark, let alone nearly 2 TF.

HD8000M.jpg
 
D

Deleted member 80556

Unconfirmed Member
The other issue is that with the PS3, devs could pretty much do bare-metal coding. On the 360, everything was done through an API (DirectX). It's far, far easier to make an API compatible than to work on a case-by-case basis with games that directly interfaced with the GPU.

So Sony could use an API similar to DirectX (which seems to be at least kinda different with each game) to do BC on their next console?
 

charsace

Member
Are you telling me Sony is gonna use Nvidia for the GPU? Because if I recall correctly, most, if not all, rumors have pointed at all of the Big Three using AMD as their GPU provider.



Oh, so that's what he meant. Good points; a change of GPU architecture would make BC harder. But if the jump to next-gen is big, could it be possible?

There are also licensing issues. One reason MS couldn't do BC is that they didn't own the graphics chip they used in the first Xbox. This had an effect on how they could go about emulating the first Xbox's graphics.
 

RoboPlato

I'd be in the dick
My apologies. In haste, I missed that part. However, I am still a bit confused. Do you mean that the APU+GPU will yield 2 TFLOPS when the discrete GPU in question is an 8xxxM?

I ask this since the following image shows that the 8800M performs under 1 TF. I would rationally assume that even an 8900M will not topple the 1.5 TF mark, let alone nearly 2 TF.

http://media.pcgamer.com/files/2012/12/HD8000M.jpg

I think he meant an APU with something like an 8xxxM as a separate card to boost processing power. If Microsoft is going more custom and pretty powerful on the GPU, something like that would be a cheap way for Sony to get closer in power without straining heat or TDP.
 

Karak

Member
Not sure if you consider me qualified, but all that super computer talk (for both Cell and Durango) is nothing but hyperbolic bull. Completely meaningless in regards to real world performance.

I think it does actually mean something very specific. Its use seems connected to some difference between the systems, not in terms of total computing power but something to do with the overall setup. I could of course be wrong, but with the Seymour drop, and someone unrelated saying much the same thing and directly comparing it to something else... I don't know. Maybe it's just due to the ARM chip (if there is one in there), but it isn't being used as a "THIS THING IS SO AWESOME ITS TEH SUPERPUTER" statement; it's more "the system is like a supercomputer," a muted statement. I am for sure going to ask why he said that. It was very odd and very pointed.
 
I think it does actually mean something very specific. Its use seems connected to some difference between the systems, not in terms of total computing power but something to do with the overall setup. I could of course be wrong, but with the Seymour drop, and someone unrelated saying much the same thing and directly comparing it to something else... I don't know. Maybe it's just due to the ARM chip (if there is one in there), but it isn't being used as a "THIS THING IS SO AWESOME ITS TEH SUPERPUTER" statement; it's more "the system is like a supercomputer," a muted statement. I am for sure going to ask why he said that. It was very odd and very pointed.

KageMaru is right, every generation we get a "supercomputer" and it always ends up just being a bunch of BS.
 

K.Jack

Knowledge is power, guard it well
My apologies. In haste, I missed that part. However, I am still a bit confused. Do you mean that the APU+GPU will yield 2 TFLOPS when the discrete GPU in question is an 8xxxM?

I ask this since the following image shows that the 8800M performs under 1 TF. I would rationally assume that even an 8900M will not topple the 1.5 TF mark, let alone nearly 2 TF.

HD8000M.jpg

FYI, the 8800M is just a rebrand of the 7800M, which was a downclocked Radeon HD 7770, while the 7970M was a downclocked 7870. The 8900M will be a real next-gen part, likely based on the 8870 or 8850.

Not that that makes your assertion about TF incorrect....
 

Tripolygon

Banned
FYI, the 8800M is just a rebrand of the 7800M, which was a downclocked Radeon HD 7770, while the 7970M was a downclocked 7870. The 8900M will be a real next-gen part, likely based on the 8870 or 8850.

Not that that makes your assertion about TF incorrect....

All the announced AMD 8000M series cards are new, none of them are rebranded previous-gen cards. At least that is what I got from reading AnandTech and a few other sites.
 

KageMaru

Member
I think it does actually mean something very specific. Its use seems connected to some difference between the systems, not in terms of total computing power but something to do with the overall setup. I could of course be wrong, but with the Seymour drop, and someone unrelated saying much the same thing and directly comparing it to something else... I don't know. Maybe it's just due to the ARM chip (if there is one in there), but it isn't being used as a "THIS THING IS SO AWESOME ITS TEH SUPERPUTER" statement; it's more "the system is like a supercomputer," a muted statement. I am for sure going to ask why he said that. It was very odd and very pointed.

I missed your earlier interesting update; are you sure he was referring to Seymour Cray?

I'll admit that the supercomputer reference from your cousin and the quote in Sweetvar's post aren't necessarily PR like we've seen so many times before, but it's still misleading in a way, since when people read/hear "supercomputer" they take the meaning somewhat literally. Maybe context is what's missing most, but it's still a meaningless term IMO. Edit: Not that I'm saying your cousin or Sweetvar were trying to mislead or anything. =p

Really? Got any links on that one? Last I heard you couldn't get through certification if you didn't go through the API.

No matter what platform you develop on, you're going through an API. Doesn't matter if it's DX, PSGL, libGCM, etc.; you'll have to use an API, since games today are far too complex to be written completely to the metal.

Here's a good thread where a few devs say the same thing, I'm sure you can figure out who knows what they are talking about in that thread ;p

http://forum.beyond3d.com/showthread.php?t=62049
 
Can you elaborate on the bolded part? "Xbox3 could have some of the GPU reserved for serving to handhelds."

Not sure if you consider me qualified, but all that super computer talk (for both Cell and Durango) is nothing but hyperbolic bull. Completely meaningless in regards to real world performance.
Supercomputers have a fabric that binds all the separate processes/threads. AMD just bought SeaMicro, for example: "AMD’s SeaMicro SM15000™ is a revolutionary server that brings together compute, networking, and the SeaMicro Freedom™ supercompute fabric in a single 10 RU system."

This is a diagram from a Microsoft patent for the Xbox 720, as envisioned in the leaked Xbox 720 PowerPoint; notice the communication fabric.

3.png


It's essentially this:

Slide9.jpg


There are multiple APUs that have to share resources via a communication fabric. Quality of Service is integrated into the fabric, as are DVR, serving Xbox 720 games to handhelds, serving video (including re-encrypting) and more. All of them have to communicate with resources, schedule themselves, pass tokens and more. To a smaller degree, all multi-processor architecture platforms, like the full-HSA AMD SoC envisioned for 2014, have a fabric memory model so that critical resources are managed and multiple processes on multiple CPUs don't step on each other.

It's a fabric made up of millions of threads. For others, since I'm sure you know this, a thread is a programming term: "Threads are one of several technologies that make it possible to execute multiple code paths concurrently inside a single application." Now imagine millions of threads making up a communication fabric allowing the management of resources on multiple APUs.
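As a toy illustration of that definition (just standard C++ threads here, nothing to do with whatever fabric MS may or may not be building): a few code paths running concurrently and sharing one resource without stepping on each other.

[code]
// Toy example of "multiple code paths executing concurrently inside a
// single application": four worker threads draining a shared job queue.
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<int> jobs;      // shared resource
    std::mutex jobsMutex;      // keeps workers from stepping on each other
    for (int i = 0; i < 100; ++i) jobs.push(i);

    auto worker = [&] {
        for (;;) {
            int job;
            {
                std::lock_guard<std::mutex> lock(jobsMutex);
                if (jobs.empty()) return;
                job = jobs.front();
                jobs.pop();
            }
            (void)job; // ...do the actual work here...
        }
    };

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(worker);
    for (auto& t : workers) t.join();
    std::cout << "all jobs done\n";
}
[/code]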

I've been going on about this and been told that the Xbox 720 PowerPoint is too old and should be ignored. Microsoft may do the above while Sony may use Nasne and their other CE platforms. That makes the PS4 cheaper, but combining Nasne + PS4 more expensive than an Xbox 720 that can do it all. <grin> Sony will have to pass that baton to Microsoft. SteveP has been talking about fast memory and slow memory in the same system because he understands the diagram.
 

onQ123

Member
Lack of cell and a switch from Nvidia to AMD would make BC hard for Sony.

PS3 was the closest Sony has ever been to using standard parts, & games were made using OpenGL ES.

So PS3 might be easier to emulate than the PS1 & PS2 were when it was time for new consoles.
 
My apologies. In haste, I missed that part. However, I am still a bit confused. Do you mean that the APU+GPU will yield 2 TFLOPS when the discrete GPU in question is an 8xxxM?

I ask this since the following image shows that the 8800M performs under 1 TF. I would rationally assume that even an 8900M will not topple the 1.5 TF mark, let alone nearly 2 TF.

HD8000M.jpg
There are two GPUs missing from that chart, and mobile GPUs in a console, which doesn't have battery issues, can scale up the clock speed. The APU could be 0.5 to 1.5 TFLOPS, for example, and the second GPU 2+ TFLOPS at a higher clock speed. It's possible that the Q2 2013 mobile GPUs are 20nm and in the larger planet class like Jupiter, with Thebe being a moon of Jupiter. Who knows; this is still speculation.
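For anyone wondering where those TFLOPS figures come from, the usual back-of-the-envelope formula is shader count x 2 ops per clock (a fused multiply-add) x clock speed, so a clock bump scales the number directly. The shader counts and clocks below are made up just to show the scaling.

[code]
// Back-of-the-envelope peak-FLOPS math (shader counts and clocks below are
// made up purely for illustration).
#include <cstdio>

double peakTflops(int shaders, double clockGhz) {
    // shaders * 2 ops per clock (a fused multiply-add) * clock, in TFLOPS
    return shaders * 2.0 * clockGhz / 1000.0;
}

int main() {
    std::printf("1280 shaders at 0.80 GHz: %.2f TFLOPS\n", peakTflops(1280, 0.80)); // ~2.05
    std::printf("1280 shaders at 1.00 GHz: %.2f TFLOPS\n", peakTflops(1280, 1.00)); // ~2.56
}
[/code]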
 
All this "PS4 is weak" talk comes mostly from the lack of communication on Sony's part. We had a bunch of people leaking 720 stuff, so it's easier to see why so many people feel like they know what MS will do.

The day Sony comes up with inferior hardware is the day they stop being Sony. They might get fucked over time, but they always go for the high end stuff. The user commenting on how they are handling phones and cameras said it all - they never gimp anything.

And also, if MS is building a "Supercomputer" that only produces 2-3 TFLOPS, then it's pretty easy to achieve the same with off the shelf parts like Sony is doing with AMD.

I just think the whole conversation went sour for Sony without us even knowing what they will offer.
 

i-Lo

Member
There are two GPUs missing from that chart, and mobile GPUs in a console, which doesn't have battery issues, can scale up the clock speed. The APU could be 0.5 to 1.5 TFLOPS, for example, and the second GPU 2+ TFLOPS at a higher clock speed. It's possible that the Q2 2013 mobile GPUs are 20nm and in the larger planet class like Jupiter, with Thebe being a moon of Jupiter. Who knows; this is still speculation.

Ah, thank you.

So in the end, both the APU's GPU and the discrete GPU could work simultaneously during gameplay. As a layperson, I wonder: do programmers need to specify in the program which part of rendering is handled by which GPU? If data sharing and offloading go back and forth between the APU and GPU automatically, based on the amount of work or task priority, then it sounds efficient; otherwise, writing additional code to apportion GPU rendering and calculation tasks separately seems like an ordeal.

Kindly help me, a technologically stunted human.
 
I just think the whole conversation went sour for Sony without us even knowing what they will offer.

yeah, they've remained (amazingly) pretty sealed so far. i don't know if it's because some specs are still totally up in the air, if they really won't launch next year or because they just have no clue what to do (which i doubt). it feels kinda weird, these past few years everyone had trouble keeping secrets...the internet is failing us :)
 

KageMaru

Member
PS3 was the closest Sony has ever been to using standard parts, & games were made using OpenGL ES.

So PS3 might be easier to emulate than the PS1 & PS2 were when it was time for new consoles.

I have major doubts that the PS3 would be easier to emulate than the PS1, considering Bleem! came out before the 32/64-bit cycle ended =P

The day Sony comes up with inferior hardware is the day they stop being Sony.

Sony has had inferior hardware the last two generations and was only competitive in performance this gen.

yeah, they've remained (amazingly) pretty sealed so far. i don't know if it's because some specs are still totally up in the air, if they really won't launch next year or because they just have no clue what to do (which i doubt). it feels kinda weird, these past few years everyone had trouble keeping secrets...the internet is failing us :)

I'll tell you the specs aren't up in the air. That much is certain.
 

Karak

Member
I missed your earlier interesting update; are you sure he was referring to Seymour Cray?

I'll admit that the supercomputer reference from your cousin and the quote in Sweetvar's post aren't necessarily PR like we've seen so many times before, but it's still misleading in a way, since when people read/hear "supercomputer" they take the meaning somewhat literally. Maybe context is what's missing most, but it's still a meaningless term IMO. Edit: Not that I'm saying your cousin or Sweetvar were trying to mislead or anything. =p

No problem. I didn't highlight it much so it was easy to miss.

Context is of course a must. Here is what I "think" knowing him and also a couple things he informed me of before they went public.

I "think" that the different way in which these are being referenced versus the past "uber" speak we are always getting hints at some kind of difference. Additionally it isn't bandied around a great deal trying to explain that the new system can control missiles and such like in the past:)

When I asked about the misconception, that was one of the main bits. As a tech head extraordinaire... I think he did mean Cray. One thing he doesn't do is mislead. He won't answer, he will protect himself here and there, or go mute when he can't... truly can't even hint about something. However, he was the one who stopped by and he was the one broaching the subject, and what that USUALLY means is that something is turning quickly in his area and he can't talk openly, but he can hint around or suggest or share his excitement.
But he was the one giving me a ton of data about what was going on with the new anti-hacking measures that ended up being fuse burning, and he was the one who, way before I read it elsewhere, had me chomping at the bit for the move from blades to the new interface about 9 months before it even went to beta. Guessing and using those two parameters gives you a somewhat good indication of one of the sections of MS he spends time developing in ;)

I don't know. He rarely says much, but when he does he has yet to NOT tell me the truth. So I guess time will tell :) I personally will be on the lookout for some kind of unexpected hardware partner announcement in the next couple of months, as well as a unique difference between these 2 systems. If the misconception somehow indicates the makeup of the system itself, with Seymour being the dropped hint, I can only expect some kind of multi-chip arrangement not done within the other system. I am not saying one will be ultra powerful. It could be security + something to do all the extras we keep hearing about. Though that doesn't really hit as a misconception.
 

RoboPlato

I'd be in the dick
There are two GPUs missing from that chart, and mobile GPUs in a console, which doesn't have battery issues, can scale up the clock speed. The APU could be 0.5 to 1.5 TFLOPS, for example, and the second GPU 2+ TFLOPS at a higher clock speed. It's possible that the Q2 2013 mobile GPUs are 20nm and in the larger planet class like Jupiter, with Thebe being a moon of Jupiter. Who knows; this is still speculation.

I never thought about it this way before. I was always thinking of scaling down a desktop GPU, not scaling up a mobile GPU. It makes a lot of sense.
 

Mitsurugi

Neo Member
The Xbox 720, supercomputer?

Am I crazy in thinking this may mean a HAL 9000 or Siri type of user interface? Using Kinect or a headset, users would be able to shout hundreds of commands at the system. The Kinect's camera could be used for facial recognition, body scanning and monitoring moods, as well as scanning bar codes.
 

onQ123

Member
Why is it that Xbox 3 info always shows up in the PS4 thread before it does in an Xbox 3 thread? lol



I guess it's because the less we know, the more we talk, so the PS4 thread stays on the front page, while info for the Xbox 3 comes out & the thread dies down until there's something new to talk about.
 
Why is it that Xbox 3 info always shows up in the PS4 thread before it does in an Xbox 3 thread? lol



I guess it's because the less we know, the more we talk, so the PS4 thread stays on the front page, while info for the Xbox 3 comes out & the thread dies down until there's something new to talk about.

I just assumed this is the next gen rumor thread....
 

Mindlog

Member
Why is it that Xbox 3 info always shows up in the PS4 thread before it does in an Xbox 3 thread? lol



I guess it's because the less we know, the more we talk, so the PS4 thread stays on the front page, while info for the Xbox 3 comes out & the thread dies down until there's something new to talk about.
Probably has more to do with the overlap.
Dat, one console future.
 

mrklaw

MrArseFace
If the PS4 ships with a more standard CPU and GPU, how would that affect first-party devs that have spent years optimising complex multicore code to take advantage of Cell? Would they move some of that code to GPU computing, or just fall back to standard coding on the main CPU?

It seems that in some respects, if GPU computing isn't embraced, we'll be seeing a backwards step in terms of parallel computing at least.
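For illustration, here's roughly what "fall back to standard coding on the main CPU" could look like (plain C++ threads standing in for what SPU jobs used to do; this is a sketch, not anyone's actual toolchain):

[code]
// Illustrative only: a data-parallel job (say, a particle update) that might
// once have been hand-split across SPUs, expressed as plain CPU threads on a
// multi-core x86 part.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void updateParticles(std::vector<float>& positions,
                     const std::vector<float>& velocities, float dt) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (positions.size() + cores - 1) / cores;
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            const std::size_t begin = c * chunk;
            const std::size_t end = std::min(positions.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                positions[i] += velocities[i] * dt;  // trivially parallel work
        });
    }
    for (auto& t : workers) t.join();
}
[/code]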
 

StevieP

Banned
next gen console gone beta with NO confirmed leaks is pretty.....

The 2010 documents are confirmed leaks.

If the PS4 ships with a more standard CPU and GPU, how would that affect first-party devs that have spent years optimising complex multicore code to take advantage of Cell? Would they move some of that code to GPU computing, or just fall back to standard coding on the main CPU?

It seems that in some respects, if GPU computing isn't embraced, we'll be seeing a backwards step in terms of parallel computing at least.

I don't know about first party dev houses, but the move to x86/a standardized architecture has made quite a few people I've talked to jump for joy - literally. I don't think the transition will be too rough on them. lol
 

mrklaw

MrArseFace
Incoming potentially stupid question. Are unified shaders still the best way forward? Surely they'd use up more space than an optimised pixel or vertex shader, so you'd get fewer on a chip? Would there be any benefit in biasing, e.g., towards more dedicated pixel shaders and maybe having the CPU assist with vertices?

Not sure if anyone has analysed current gen engines to see where the load generally goes?
 

mrklaw

MrArseFace
The 2010 documents are confirmed leaks.



I don't know about first party dev houses, but the move to x86/a standardized architecture has made quite a few people I've talked to jump for joy - literally. I don't think the transition will be too rough on them. lol


But simple to develop for doesn't necessarily mean more powerful or efficient. When you see how multicore inefficient many PC ports are, isn't that a little disconcerting, letting all that power go to waste?
 

StevieP

Banned
But simple to develop for doesn't necessarily mean more powerful or efficient. When you see how multicore inefficient many PC ports are, isn't that a little disconcerting, letting all that power go to waste?

Time and money/manpower (or lack thereof on some ports), and I'm sure the hardware abstraction layer doesn't provide a net advantage there, the API being a bit thicker on Windows.
 

Ashes

Banned
What's in the first run dev kit doesn't actually represent what's in the final console. It's not an uncommon thing. Just mimicking an approximation of the development environment.

Oh, I understand that. But using competitor hardware seems like a bad idea.
 