
vg247-PS4: new kits shipping now, AMD A10 used as base, final version next summer

Ashes

Banned
The point being stressed here is that you wouldn't be buying GDDR5 because you wouldn't need to.


 

Ashes

Banned
fast 512 KB > 1 GB slow.

Edit: Would be cool to see a high-performance graphics card where the situation is reversed. As in, one manufacturer, all things being equal, opts for more RAM with less bandwidth, and another opts for less RAM with greater bandwidth.
 
Any GDDR5 that's worthwhile will get hot. That's why cards with GDDR5 are a lot more expensive, have bigger fans, and draw more power.

I'm not disputing that it gets hot; memory is generally one of the hottest parts on a card. I'm disputing the "very" part. I'm willing to bet it's pretty much in line with the XDR + GDDR3 setup (which was a lot worse). Would cost about the same too.
 

nib95

Banned
I'm not disputing that it gets hot; memory is generally one of the hottest parts on a card. I'm disputing the "very" part. I'm willing to bet it's pretty much in line with the XDR + GDDR3 setup (which was a lot worse). Would cost about the same too.

Yeah, GDDR5 gets hot, but I'm not sure about "very". You can actually stick passive aluminium sinks onto each GDDR5 RAM module on a GPU and it'll run fine. You don't have to have it connected to the main heat sink and fan. In fact, many water-cooled systems are set up that way, with the water block only connected to the main chipset, whilst the RAM modules are just passively cooled.
 
I wonder what graphics card manufacturers like MSI, EVGA, Nvidia, or AMD prefer when they make their cards...

They don't have ESRAM, and the PC environment is completely different. PC cards have to handle up to Eyefinity triple-monitor resolutions and 200 fps.

Anyway, even some PC GPUs could get by with 70 GB/s. I want to say 7770s ship with around that amount of bandwidth on GDDR5.

Edit: Yup, 72 GB/s

http://www.amd.com/US/PRODUCTS/DESKTOP/GRAPHICS/7000/7770/Pages/radeon-7770.aspx#2

72GB/s memory bandwidth (maximum)

Actually, now that makes me wonder why ESRAM at all, lol. If the main DDR bandwidth was reasonably close to enough to max out a 7770...

I guess you have to account for CPU bandwidth too?

I hope it isn't the case that it's actually a 128-bit bus to main memory. I still worry about that a little. But surely not, since no devs have corrected the 68 GB/s figure.
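
For what it's worth, that 72 GB/s figure falls straight out of bus width times effective data rate. A minimal sketch in Python, assuming the reference 7770's 128-bit bus and GDDR5 at 1125 MHz (4.5 GT/s effective):

# Peak memory bandwidth = (bus width in bytes) x (effective transfer rate)
bus_width_bits = 128       # reference HD 7770 memory bus
effective_rate_gt_s = 4.5  # GDDR5 at 1125 MHz, quad-pumped
bandwidth_gb_s = (bus_width_bits / 8) * effective_rate_gt_s
print(bandwidth_gb_s)      # 72.0 GB/s, matching AMD's quoted maximum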
 
I don't see why DDR4 even gets brought up. And I see people are still stuck on the whole stacking craze.

Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.
 

Ashes

Banned
They don't have ESRAM, and the PC environment is completely different. PC cards have to handle up to Eyefinity triple-monitor resolutions and 200 fps.

Anyway, even some PC GPUs could get by with 70 GB/s. I want to say 7770s ship with around that amount of bandwidth on GDDR5.

True. I can't argue with that [because I don't know how that works]. But if you are building a gaming rig and you ask the entire graphics card industry whether they would want better-quality RAM or a greater quantity of RAM, what would they say?

It's a delicate balance. Both are important.
 
Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.

Yeah, but the GDDR3 and XDR in the PS3/360 are really old, and they seem to have been able to cost-reduce those systems OK.

Shrug.

Anyway, I hear it's DDR3 in Durango, specifically not DDR4. Not saying that it'll stay for retail.
 
They don't have ESRAM, and the PC environment is completely different. PC cards have to handle up to Eyefinity triple-monitor resolutions and 200 fps.

Anyway, even some PC GPUs could get by with 70 GB/s. I want to say 7770s ship with around that amount of bandwidth on GDDR5.

Edit: Yup, 72 GB/s

http://www.amd.com/US/PRODUCTS/DESKTOP/GRAPHICS/7000/7770/Pages/radeon-7770.aspx#2



Actually, now that makes me wonder why ESRAM at all, lol. If the main DDR bandwidth was reasonably close to enough to max out a 7770...

I guess you have to account for CPU bandwidth too?

I hope it isn't the case that it's actually a 128-bit bus to main memory. I still worry about that a little. But surely not, since no devs have corrected the 68 GB/s figure.

Where do you get the impression that 72 GB/s can max out a 7770? It may use this sort of RAM, but that doesn't mean it would not go faster with faster RAM. They also want to sell their better cards; that's why they don't make a cheap card as good as possible. It's very different with consoles, where they have to absolutely optimize all their components for maximum performance.
 

mrklaw

MrArseFace
Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.


Just bad timing. Another 12-18 months and it'd be a shoo-in, but it is what it is, and Sony need to move now.

Maybe we'll have a relatively short gen; if the machines are profitable quickly there is less pressure to elongate the gen to get returns, and tech advances in the next couple of years might make a decent jump doable (it will need GPUs to get a lot more powerful per watt, though).
 

Ashes

Banned
Anyway, even some PC GPUs could get by with 70 GB/s. I want to say 7770s ship with around that amount of bandwidth on GDDR5.

Edit: Yup, 72 GB/s

http://www.amd.com/US/PRODUCTS/DESKTOP/GRAPHICS/7000/7770/Pages/radeon-7770.aspx#2



Actually, now that makes me wonder why ESRAM at all, lol. If the main DDR bandwidth was reasonably close to enough to max out a 7770...

I guess you have to account for CPU bandwidth too?

I hope it isn't the case that it's actually a 128-bit bus to main memory. I still worry about that a little. But surely not, since no devs have corrected the 68 GB/s figure.

Er, no. It's more cost- and power-efficient to use a 128-bit bus in combination with GDDR5 memory than a 256-bit bus with slower-than-GDDR5 memory.

That's a value card, I'm guessing. I doubt the x360-2 would use a 128-bit memory bus, to be honest. But I don't know anymore.
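
If it helps, here's the trade-off that quote is describing, sketched with plausible but assumed clocks (the 7770's GDDR5 on a 128-bit bus versus DDR3-2133 on a 256-bit bus; neither is a confirmed console spec):

# Peak bandwidth for two hypothetical configurations
def peak_bw_gb_s(bus_bits, rate_gt_s):
    return bus_bits / 8 * rate_gt_s

print(peak_bw_gb_s(128, 4.5))    # 72.0 GB/s  - narrow bus, fast GDDR5
print(peak_bw_gb_s(256, 2.133))  # ~68.3 GB/s - wide bus, slower DDR3-2133

Note how the wide-bus DDR3 case lands right around the rumoured 68 GB/s figure.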
 

Razgreez

Member
Just bad timing. Another 12-18 months and it'd be a shoo-in, but it is what it is, and Sony need to move now.

Maybe we'll have a relatively short gen; if the machines are profitable quickly there is less pressure to elongate the gen to get returns, and tech advances in the next couple of years might make a decent jump doable (it will need GPUs to get a lot more powerful per watt, though).

Where does the 12-18 months figure come from? DDR4 chips are already in production (although they were produced before the specifications were finalized), but that's similar to the HDMI situation with the PS3.
 
Er, no. It's more cost- and power-efficient to use a 128-bit bus in combination with GDDR5 memory than a 256-bit bus with slower-than-GDDR5 memory.

That's a value card, I'm guessing. I doubt the x360-2 would use a 128-bit memory bus, to be honest. But I don't know anymore.

Huh? Your post doesn't make a lot of sense.

But no, that's not a value card; those are the reference 7770 specifications.
 

Ashes

Banned
Huh? Your post doesn't make a lot of sense.

But no, that's not a value card; those are the reference 7770 specifications.

Well, I guess we'll have to agree to disagree:

The Sapphire HD 7700 series is a family of graphics cards targeting the enthusiast on a budget and mainstream users looking for increased graphics performance. It is based on the second family of GPUs from AMD built in its 28nm process and featuring the highly acclaimed GCN optimized graphics processing architecture.

Edit: Oh, you used the GHz Edition reference specification. Didn't catch that... I guess it's a little bit better than "the enthusiast on a budget". Sorry about that.
 
Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.
Correct, but I'd add that at a moderate memory speed you can only stack four DDR3-type DRAM dies on top of each other because of heat. DDR4 is designed to run cooler, so at the same clock speed you can stack eight. GDDR5 can't be stacked, or if it is, it's running at a much slower clock speed.

Stacked wide-bus memory at slower-than-GDDR5 speeds is where the efficiency comes from: lower drive voltage, because the memory sits closer to the logic with shorter traces (no traces running from memory chip to memory chip), and more transfers per clock. Think of a large, short pipe at low pressure transferring more than a much smaller pipe at very high pressure. The large, short pipe, not needing pressure, is a more efficient design, needing less energy for the pump and less current from the memory controller.

With a large contract (console volume), custom DDR4 memory is going to be available in quantity and cheap by the middle of this year. Citations for this are already in NeoGAF threads. If there were some concern about availability, then 4-high stacked DDR3, with two chips side by side instead of one, would be used.

The PS4, and I think the Xbox 3, will both use stacked Wide IO RAM, and this will jump-start DDR4 stacked memory for the rest of the industry (notebooks, tablets).

1) Thebe is on LPM silicon, not performance silicon (Jaguar CPUs with a Kabini/Temesh/Samara/Pennar APU base design)
2) LPM silicon will have clock speeds BELOW 2 GHz
3) This includes the memory controller clock speed!
4) A Wide IO bus is REQUIRED. You can't use GDDR5, as that requires a higher memory clock speed.
5) Even with a 512-bit Wide IO bus, memory bandwidth will be about 100 GB/s (worked out in the sketch just below).
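
Worked out, with the caveat that the 1.6 GT/s effective rate is my assumption, chosen only because it's a plausible sub-2 GHz figure that lands on the claimed number:

# 512-bit Wide IO bus at a low, LPM-friendly transfer rate
bus_width_bits = 512
effective_rate_gt_s = 1.6  # assumed 1600 MT/s effective
print(bus_width_bits / 8 * effective_rate_gt_s)  # 102.4 GB/s, i.e. "about 100 GB/sec"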

8000M GPUs are on LPM silicon, but 8000-series desktop GPUs are on performance silicon. Mobile GPUs also don't include the subset of the GNB that supports audio, video, and the DP or HDMI ports. Mobile GPUs are no longer binned.

Even at the slower memory clock, the GNB is going to have issues with efficiency (the GNB contains the memory controller). The custom GNB may be at 20nm.

Sony is going for a very efficient, near-future AMD notebook design for cost reasons. They will put more money into accessories like the S3D head-mounted glasses; this is where I think both are taking next generation, as PCs can't compete there. The minimum spec is 5X PS3 memory bandwidth and 10X PS3 GPU to support glassless S3D on 4K TVs, which requires four video streams; internally, the 4K TV turns these into ten sweet-spot glassless S3D views (a quick check on that bandwidth figure follows the links below).

http://semiaccurate.com/forums/showpost.php?p=176201&postcount=1229

http://www.neogaf.com/forum/showpost.php?p=46761114&postcount=7498

http://www.neogaf.com/forum/showpost.php?p=46765921&postcount=7553
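
A quick sanity check on that "5X PS3 memory bandwidth" minimum, using the usual published PS3 figures (25.6 GB/s XDR for Cell, 22.4 GB/s GDDR3 for RSX); which pool the "5X" refers to is ambiguous, so both readings are shown:

# Two readings of "5X PS3 memory bandwidth"
xdr_gb_s = 25.6    # PS3 main memory (XDR)
gddr3_gb_s = 22.4  # PS3 video memory (GDDR3)
print(5 * xdr_gb_s)                 # 128.0 GB/s if it means the XDR pool alone
print(5 * (xdr_gb_s + gddr3_gb_s))  # 240.0 GB/s if it means both pools combined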
 

Snubbers

Member
If you ask developers what they prefer... less but faster RAM, or more but slower RAM... what is the consensus?

Even if they answered you, it wouldn't make any difference in the discussion of Orbis/Durango.

The aim of developers is to max things out as a system, ensuring the 'throughput' of data is as high as possible, and RAM interface bandwidth is just one part of the chain.

It's been stated by people in the know that shaders usually sit idle waiting for data in a PC architecture, so getting 80%+ utilisation out of them for graphical purposes is quite hard to do.

Whilst I don't believe MS have got an overall better package, I can see that they are making some effort to maximise the ability to up shader utilisation so they can make more of what they have. This is possible because they aren't constrained by PC interface architectures that need to conform to an industry standard most of the time.

All this means is that, in theory, Orbis has the higher numbers, and that's unlikely to change. However, how much of that sizeable on-paper 'difference' Durango can claw back through increasing the supposed system efficiency is anyone's guess.

My take is simple: with MS reserving 2 cores of the CPU (that's up to 25% less CPU resource for the game) and having 6 fewer CUs (assuming identical CU makeup), I think they still have a final deficit of 20-25% versus Orbis (rough numbers sketched below). Loosely, by some measures, people might think that isn't too bad; however, I think it'll lead to a much more obvious gap to the typical GAF frequenter than this generation.
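
Rough numbers behind that estimate, using the rumoured CU counts and an assumed 800 MHz GPU clock for both (GCN does 2 FLOPs per lane per clock, with 64 lanes per CU; the clock is an assumption, not a confirmed spec):

# Theoretical GCN GPU throughput
def gcn_gflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz  # CUs x lanes x FMA ops x clock

orbis = gcn_gflops(18, 0.8)    # 1843.2 GFLOPS
durango = gcn_gflops(12, 0.8)  # 1228.8 GFLOPS
print(1 - durango / orbis)     # ~0.33, i.e. a 33% GPU-side gap

On raw GPU math alone the gap is nearer 33%, so the 20-25% overall figure presumably averages in the smaller CPU-side difference and any efficiency claw-back.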
 

mrklaw

MrArseFace
Where does the 12-18 months figure come from? DDR4 chips are already in production (although they were produced before the specifications were finalized), but that's similar to the HDMI situation with the PS3.

Develop the process (pretty much done), prove it in volume production with good yields, design your architecture around it, and produce your console based on that. If you're doing fully stacked RAM, it's a fundamental part of your system and almost everything is dependent on it.
 

Razgreez

Member
Hey, you guys, why is Microsoft in the HMC consortium?

Edit: oops, wrong thread... but at least it might be relevant!

Unlikely to be relevant, since the roadmap indicates production starting at the end of this year whether or not the specification is finalized, and that's the best-case scenario. That leaves little time to ramp up production for either of these consoles.
 

Ashes

Banned
Unlikely to be relevant, since the roadmap indicates production starting at the end of this year whether or not the specification is finalized, and that's the best-case scenario. That leaves little time to ramp up production for either of these consoles.

I just wanted to post Microsoft PR:

May 9, 2012 -

The Hybrid Memory Cube Consortium (HMCC), led by Micron Technology, Inc. (Nasdaq:MU), and Samsung Electronics Co., Ltd., today announced that Microsoft Corp. has joined the consortium. The HMCC is a collaboration of original equipment manufacturers (OEMs), enablers and integrators who are cooperating to develop and implement an open interface standard for an innovative new memory technology called the Hybrid Memory Cube (HMC). Micron and Samsung, the initial developing members of the HMCC, are working closely with Altera, IBM, Open-Silicon, Xilinx and now Microsoft to accelerate widespread industry adoption of HMC technology.

The technology will enable highly efficient memory solutions for applications ranging from industrial products to high-performance computing and large-scale networking. The HMCC's team of developers plans to deliver a draft interface specification to a growing number of "adopters" that are joining the consortium. Then, the combined team of developers and adopters will refine the draft and release a final interface specification at the end of this year.

Adopter membership in the HMCC is available to any company interested in joining the consortium and participating in the specification development. The HMCC has responded to interest from more than 75 prospective adopters.

As envisioned, HMC capabilities will leap beyond current and near-term memory architectures in the areas of performance, packaging and power efficiencies, offering a major shift from present memory technology. By opening new doors for developers, manufacturers and architects, the consortium is committed to making HMC a new standard in high-performance memory technology.

"HMC technology represents a major step forward in the direction of increasing memory bandwidth and performance, while decreasing the energy and latency needed for moving data between the memory arrays and the processor cores, " said KD Hallman, General Manager of Microsoft Strategic Software/Silicon Architectures. "Harvesting this solution for various future systems could lead to better and/or novel digital experiences."

One of the primary challenges facing the industry -- and a key motivation for forming the HMCC -- is that the memory bandwidth required by high-performance computers and next-generation networking equipment has increased beyond what conventional memory architectures can provide. The term "memory wall" has been used to describe this dilemma. Breaking through the memory wall requires architecture such as the HMC that can provide increased density and bandwidth at significantly reduced power consumption.
 

Razgreez

Member
I just wanted to post Microsoft PR:

Look at the actual design of the HMC. It's a standardized specification for stacking RAM with a necessary logic layer, using TSVs. For PCs, phones, etc. you need such standardized specifications. For consoles you don't. These consoles will have their own HMC equivalents already, just not built to the exact HMC specification. It's the benefit of building a bespoke system.
 

Ashes

Banned
Look at the actual design of the HMC. It's a standardized specification for stacking RAM with a necessary logic layer, using TSVs. For PCs, phones, etc. you need such standardized specifications. For consoles you don't. These consoles will have their own HMC equivalents already, just not built to the exact HMC specification. It's the benefit of building a bespoke system.

Bandwidth.
 

Razgreez

Member
Your point being? Obviously it's in the search for more efficient bandwidth; that's kind of a given. Efficiency is the focus of all design these days.
 

Ashes

Banned
Your point being? Obviously it's in the search for more efficient bandwidth; that's kind of a given. Efficiency is the focus of all design these days.

I don't mean to come across as confusing and/or puzzling, but that's Microsoft PR speak, talking about memory walls, bandwidth limitations, and all sorts. Not some convoluted halfway measure that doesn't even reach GDDR5 levels even if you somehow combine it all. This is a realistic, industry-wide supported solution, or at least the search for one.
 

Ashes

Banned
So you are saying Orbis is either not using GDDR5 or not reaching the 192 GB/s?

No. Either they are using GDDR5 and meeting their target bandwidth, or they are not using GDDR5 and will, using this, reach their target: 192 GB/s, if you will.
Actually, I think he is saying the PS4 and Xbox 3 will both use this. And that's that.
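
(For reference, 192 GB/s is exactly what a 256-bit GDDR5 setup at 6 GT/s effective works out to; both the bus width and the data rate here are assumptions on my part, not confirmed specs:)

bus_width_bits = 256
effective_rate_gt_s = 6.0  # GDDR5 at 1500 MHz, quad-pumped (assumed)
print(bus_width_bits / 8 * effective_rate_gt_s)  # 192.0 GB/s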
 
Jeff says a lot of things.

There is absolutely no evidence that either console is using stacking. The PS4 apparently gets its bandwidth from GDDR5, and the 720 gets its from the scratchpad.
 

Ashes

Banned
Jeff says a lot of things.

There is absolutely no evidence that either console is using stacking. The PS4 apparently gets its bandwidth from GDDR5, and the 720 gets its from the scratchpad.

Let me get this straight. You believe rumour and speculation over talks from firms actually trying to solve the memory bandwidth problem?*

And rumour and speculation from second-hand sources, and dev kits that those same sources claim have improved in leaps and bounds?

And this over what Sony's CTO himself said?

And this over what Sony did with Vita?




*And are seemingly reaching a solution of sorts.
 

gofreak

GAF's Bob Woodward
Let me get this straight. You believe rumour and speculation over talks from companies actually trying to solve the memory bandwidth problem?

And rumour and speculation from second-hand sources, and dev kits that those same sources claim have come up in leaps and bounds?

And this over what Sony's CTO himself said?

And this over what Sony did with Vita?

There's lots of evidence that Sony wants or wanted to use stacking, but no evidence yet that they are.
 
Let me get this straight. You believe rumour and speculation over talks from firms actually trying to solve the memory bandwidth problem?*

And rumour and speculation from second-hand sources, and dev kits that those same sources claim have improved in leaps and bounds?

And this over what Sony's CTO himself said?

And this over what Sony did with Vita?




*And are seemingly reaching a solution of sorts.

What? Plenty of companies are developing plenty of solutions for plenty of problems. What does this have to do with what's going on in these consoles?
 

Razgreez

Member
I don't mean to come across as confusing and/or puzzling, but that's Microsoft PR speak, talking about memory walls, bandwidth limitations, and all sorts. Not some convoluted halfway measure that doesn't even reach GDDR5 levels even if you somehow combine it all. This is a realistic, industry-wide supported solution, or at least the search for one.

That PR is taken directly from Micron themselves. HMC has long been on the cards, because stacking has long been on the cards. The difference is that no single company could push forward with an industry-wide standard. We will see HMC in our PCs eventually because of this standard; however, the stacked memory in the coming consoles will likely still be more specialized, since it will be on-die. It's the equivalent of Intel and AMD releasing their next-gen cards with stacked main memory on-die, meaning you'd not be able to change it.
 

gofreak

GAF's Bob Woodward
Not 2.5D, as I've argued, but you will see on this very page what they have already done with Vita. Apparently.

edit: started from here.

http://www.neogaf.com/forum/showpost.php?p=46761114&postcount=7498

Oh, I know they've already used a kind of stacking for Vita.

I mean for PS4. Lots of evidence they want to use it for PS4, but none yet that they are. Despite arguably leading on that kind of design for mobile chips with Vita's chip, it's still a different kettle of fish for a larger design for a home console.

In terms of Amkor, their 'involvement' in consoles doesn't indicate anything about stacking for next-gen - Amkor has been involved in consoles for a long time.

Edit: if we're arguing about whether Sony will ever use stacking for PS4, I think for sure they will... I'm talking about just whether the first version will or not.
 

Ashes

Banned
Oh, I know they've already used a kind of stacking for Vita.

I mean for PS4. Lots of evidence they want to use it for PS4, but none yet that they are. Despite arguably leading on that kind of design for mobile chips with Vita's chip, it's still a different kettle of fish for a larger design for a home console.

In terms of Amkor, their 'involvement' in consoles doesn't indicate anything about stacking for next-gen - Amkor has been involved in consoles for a long time.

Yes. This is the point, though. That's where we are, all the way from what the CTO said to what happened with the PS Vita.

They haven't yet done 2.5D TSV. And so we remain on that point.

GDDR5 is not guaranteed. But the pursuit of stacking is there.
 

Ashes

Banned
That PR is taken directly from Micron themselves. HMC has long been on the cards, because stacking has long been on the cards. The difference is that no single company could push forward with an industry-wide standard. We will see HMC in our PCs eventually because of this standard; however, the stacked memory in the coming consoles will likely still be more specialized, since it will be on-die. It's the equivalent of Intel and AMD releasing their next-gen cards with stacked main memory on-die, meaning you'd not be able to change it.

Agreed. But what hardware does Microsoft make? And secondly, if they already have a solution for their bandwidth woes, why are they in that consortium?

To be honest, I don't know what they're doing there. Just rolling the dice.

Edit: if we're arguing about whether Sony will ever use stacking for PS4, I think for sure they will... I'm talking about just whether the first version will or not.

Too big an improvement, though. They'll have to neuter the savings or something. This isn't a scaling-down operation: it provides greater bandwidth, reduces latency, and saves power. And Sony, being a hardware vendor, needs to go in this direction anyway... so their hundreds of millions will have to be spent anyway...

But I think we're agreed. It's possible, but the schedule, to my mind, is too tight.
 

Ashes

Banned
On that note, I want to put forth a different suggestion. Sony's R&D might be a million times better than my googling skills, but I think this is kinda cool.

http://youtu.be/Hm5fXj-hUpk

Probably, well, almost definitely, too late for PS4 if it's coming this year, but I think it's pretty cool nonetheless.
 
Oh, I know they've already used a kind of stacking for Vita.

I mean for PS4. Lots of evidence they want to use it for PS4, but none yet that they are. Despite arguably leading on that kind of design for mobile chips with Vita's chip, it's still a different kettle of fish for a larger design for a home console.

In terms of Amkor, their 'involvement' in consoles doesn't indicate anything about stacking for next-gen - Amkor has been involved in consoles for a long time.

Edit: if we're arguing about whether Sony will ever use stacking for PS4, I think for sure they will... I'm talking about just whether the first version will or not.
That's the cautious view, but we can walk the literature release dates mentioning game consoles using an interposer and stacked RAM back to November 2012, and production is starting within a month or two, so while there is no proof, it's pretty much a lock.

Besides the literature, we also have the following:

1) Thebe is on LPM silicon, not performance silicon (Jaguar CPUs with a Kabini/Temesh/Samara/Pennar APU base design) - speculation; everything below follows from it
2) LPM silicon will have clock speeds BELOW 2 GHz (Jaguar can have a turbo mode of 2.4 GHz for short periods)
3) This includes the memory controller clock speed! (unless the GNB is 20nm)
4) A Wide IO bus is REQUIRED. You can't use GDDR5, as that requires a higher memory clock speed.
5) Even with a 512-bit Wide IO bus, memory bandwidth will be about 100 GB/s.

LPM silicon and power modes for XTV and IPTV streaming create parameters that impact the design: low-power mobile and Jaguar CPUs for always-on power efficiency to serve handhelds and tablets (see the 6 billion figure below), with game consoles as always-connected devices in the digital home.

Another interesting fact in the Yole presentation that has the PS4 using an interposer and 512-bit stacked memory (102 GB/s) is that game controllers will be 8% of the interposer market (pages 16 & 17: game stations starting in 2013 and reaching 11% by 2017, controllers at 8% starting in 2016). I was surprised at that, as justifying it would require a very intense multi-chip, multi-function circuit. It may be that the controller is portable outside the home, the new Walkman that S3D head-mounted glasses plug into; with ATSC 2.0 it's mobile digital TV also.

That Yole can provide the above percentage figures for interposers in game consoles and controllers means they are getting the information from the packagers, and the Yole 'speculation' was wink-wink speculation...

In any case, I think the accessories are the way Sony and Microsoft are going to compete with PCs, not TFLOPS, on which they can't hope to compete. So: a low-cost, optimized, near-future AMD notebook design, with a minimum mentioned by Sony of 5X PS3 memory bandwidth and 10X PS3 GPU to support 4K TV glassless S3D.

Look at the following and understand that Sony (speculation) and Microsoft plan to support/serve tablets with their game consoles as always-connected devices in the digital home, and that Sony has plans for Gaikai also. Read all the text! 6.4 billion phones and tablets between 2012 and 2016; isn't the above now obvious?

[image: chart of 6.4 billion phones and tablets shipping between 2012 and 2016]
 