If you ask developers what they prefer... less but faster RAM? or more but slower RAM?... what is the consensus?
Depends on the type of game they are creating, tbh.
@danhese007: In your opinion as an awesome dev, would you prefer to dev with 2GB GDDR5 or 4GB DDR3?
64 kB on my VIC-20 and I'm golden.
Any GDDR5 that's worthwhile will get hot. That's why cards with GDDR5 are a lot more expensive, have bigger fans, and draw more power.
I'm not disputing that it gets hot; memory is generally one of the hottest parts on a card. I'm disputing the "very" part. I'm willing to bet it's pretty much in line with the XDR + GDDR3 setup (which was a lot worse). Would cost about the same too.
Which is to say, a lot hotter and more expensive than the DDR3 and DDR4 that are being discussed as alternatives to GDDR5 here.
I wonder what graphics card manufacturers like MSI, EVGA, Nvidia, or AMD prefer when they make/produce their cards...
72 GB/s memory bandwidth (maximum)
I don't see why DDR4 even gets brought up. And I see people are stuck on the whole stacking craze, still.
Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.
They don't have ESRAM, and the PC environment is completely different. PC cards have to handle up to Eyefinity triple-monitor resolutions and 200 fps.
Anyways, even some PC GPUs could get by with 70 GB/s. I want to say 7770s ship with around that amount of bandwidth on GDDR5.
Edit: Yup, 72 GB/s
http://www.amd.com/US/PRODUCTS/DESKTOP/GRAPHICS/7000/7770/Pages/radeon-7770.aspx#2
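That headline number is just bus width times per-pin data rate. A quick back-of-the-envelope check in Python, assuming the reference 7770's 128-bit bus and 1125 MHz (4.5 Gbps effective) GDDR5 from the AMD page above:

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = bus pins * per-pin rate / 8 bits per byte
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(128, 4.5))  # -> 72.0, matching AMD's 72 GB/s figure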
Actually, now that makes me wonder why ESRAM at all, lol. If the main DDR bandwidth is reasonably enough to max out a 7770...
I guess you have to account for CPU bandwidth too?
I hope it isn't the case that it's actually a 128-bit bus to main memory. I still worry about that a little. But surely not, since no devs have corrected the 68 GB/s figure.
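For what it's worth, the 68 GB/s figure lines up with a 256-bit bus, not a 128-bit one. Using the same formula as above, and assuming the rumoured DDR3-2133 configuration (2.133 Gbps per pin):

print(peak_bandwidth_gbs(256, 2.133))  # -> 68.3, i.e. the quoted ~68 GB/s
# A 128-bit bus would need ~4.3 Gbps DDR3 to hit that, which doesn't
# exist, so the 68 GB/s figure itself argues against a 128-bit bus.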
Because DDR4 production should be ramping up soon, DDR3 is basically at the end of its life, and for systems expected to live another 6-10 years, DDR4 will be a lot cheaper over the long haul.
Just bad timing. Another 12-18 months and it'd be a shoo-in, but it is what it is, and Sony needs to move now.
Maybe we'll have a relatively short gen. If the machines are profitable quickly there is less pressure to stretch the gen out to get returns, and tech advances in the next couple of years might make a decent jump doable (GPUs will need to get a lot more powerful per watt, though).
Er, no. It's more cost- and power-efficient to use a 128-bit-wide bus in combination with GDDR5 memory than a 256-bit bus with slower memory.
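The pin-count arithmetic behind that claim, with assumed (illustrative, not exact) per-pin rates of roughly 1.6 Gbps for commodity DDR3 and 4.5 Gbps for GDDR5:

# Bus width each memory type would need to reach a 72 GB/s target
target_gbs = 72.0
for name, gbps_per_pin in [("DDR3", 1.6), ("GDDR5", 4.5)]:
    bits = target_gbs * 8 / gbps_per_pin
    print(name, round(bits), "bit bus")  # DDR3 ~360-bit, GDDR5 128-bit

Every extra bus bit means extra pins, traces, and PHY power, which is where the cost and power argument comes from.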
That's a value card, I'm guessing. I doubt the X360-2 would use a 128-bit memory bus, to be honest. But I don't know anymore.
Huh? Your post doesn't make a lot of sense.
But no, that's not a value card; those are the reference 7770 specifications.
The Sapphire HD 7700 series is a family of graphics cards targeting the enthusiast on a budget and mainstream users looking for increased graphics performance. It is based on the second family of GPUs from AMD built in its 28nm process and featuring the highly acclaimed GCN optimized graphics processing architecture.
Correct, but I'd add that at a moderate memory speed you can only stack 4 DDR3-type DRAM dies on top of each other because of heat. DDR4 is designed to run cooler, so at the same clock speed you can stack 8 on top of each other. GDDR5 can't be stacked, or if it is, it's running at a much slower clock speed.
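To put rough numbers on that (the 4 Gb per-die density is an assumption, just a typical DRAM die of this era):

die_gbit = 4  # assumed capacity per DRAM die
print(4 * die_gbit / 8, "GB per 4-high DDR3-type stack")  # 2.0 GB
print(8 * die_gbit / 8, "GB per 8-high DDR4-type stack")  # 4.0 GB

So the cooler-running type doubles what a single stack can hold in the same footprint.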
Where does the 12-18 months figure come from? DDR4 chips are already in production (although they were produced before the specification was finalized), but that's similar to the HDMI situation with the PS3.
Hey you guys, why is Microsoft in the HMC consortium?
edit: oops wrong thread... But at least it might be relevant!
Someone tried to trick Neil (the creative director for The Last of Us) into answering this question
https://twitter.com/Neil_Druckmann/status/292229995705810944
Unlikely to be relevant, since the roadmap indicates production starting at the end of this year whether or not the specification is finalized, and that's the best-case scenario. That leaves little time to ramp up production for either of these consoles.
May 9, 2012 -
The Hybrid Memory Cube Consortium (HMCC), led by Micron Technology, Inc. (Nasdaq:MU), and Samsung Electronics Co., Ltd., today announced that Microsoft Corp. has joined the consortium. The HMCC is a collaboration of original equipment manufacturers (OEMs), enablers and integrators who are cooperating to develop and implement an open interface standard for an innovative new memory technology called the Hybrid Memory Cube (HMC). Micron and Samsung, the initial developing members of the HMCC, are working closely with Altera, IBM, Open-Silicon, Xilinx and now Microsoft to accelerate widespread industry adoption of HMC technology.
The technology will enable highly efficient memory solutions for applications ranging from industrial products to high-performance computing and large-scale networking. The HMCC's team of developers plans to deliver a draft interface specification to a growing number of "adopters" that are joining the consortium. Then, the combined team of developers and adopters will refine the draft and release a final interface specification at the end of this year.
Adopter membership in the HMCC is available to any company interested in joining the consortium and participating in the specification development. The HMCC has responded to interest from more than 75 prospective adopters.
As envisioned, HMC capabilities will leap beyond current and near-term memory architectures in the areas of performance, packaging and power efficiencies, offering a major shift from present memory technology. By opening new doors for developers, manufacturers and architects, the consortium is committed to making HMC a new standard in high-performance memory technology.
"HMC technology represents a major step forward in the direction of increasing memory bandwidth and performance, while decreasing the energy and latency needed for moving data between the memory arrays and the processor cores, " said KD Hallman, General Manager of Microsoft Strategic Software/Silicon Architectures. "Harvesting this solution for various future systems could lead to better and/or novel digital experiences."
One of the primary challenges facing the industry -- and a key motivation for forming the HMCC -- is that the memory bandwidth required by high-performance computers and next-generation networking equipment has increased beyond what conventional memory architectures can provide. The term "memory wall" has been used to describe this dilemma. Breaking through the memory wall requires architecture such as the HMC that can provide increased density and bandwidth at significantly reduced power consumption.
I just wanted to post the Microsoft PR above.
Look at the actual design of the HMC. It's a standardized specification for stacking RAM with a necessary logic layer, using TSVs. For PCs, phones, etc. you need such standardized specifications; for consoles you don't. These consoles will have their own HMC equivalents, just not built to the exact HMC specification. That's the benefit of building a bespoke system.
Your point being? Obviously it's in the search for more efficient bandwidth, that's kinda a given. Efficiency is the focus of all design these days.
... GDDR5 can't be stacked or if it is, it's running at a much slower clock speed. ...The PS4 and I think the Xbox 3 will both use stacked Wide IO Ram...
So you are saying Orbis is either not using GDDR5 or not reaching the 192 GB/s?
Jeff says a lot of things.
There is absolutely no evidence that either console is using stacking. The PS4 apparently gets its bandwidth from its GDDR5, and the 720 gets its from its scratchpad.
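And for what it's worth, the rumoured 192 GB/s doesn't need stacking at all; it falls straight out of ordinary GDDR5 on a 256-bit bus, assuming 6 Gbps parts (which already ship on PC cards). Using the same back-of-the-envelope formula from earlier in the thread:

print(peak_bandwidth_gbs(256, 6.0))  # -> 192.0 GB/s, no stacking required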
Let me get this straight. You believe rumour and speculation over talks from companies actually trying to solve the memory bandwidth problems?*
And rumour and speculation from second-hand sources, and dev kits that those same sources claim have improved in leaps and bounds?
And this over what the Sony CTO himself said?
And this over what Sony did with Vita?
*And who are seemingly reaching a solution of sorts.
There's lots of evidence that Sony wants or wanted to use stacking but no evidence yet that they are.
What?
Plenty of companies are developing plenty of solutions for plenty of problems. What does this have to do with what's going on in these consoles?
I don't mean to come across as confusing and/or puzzling, but that's Microsoft PR speak, talking about memory walls, and bandwidth limitations, and all sorts. Not some convoluted half-way measure that doesn't even reach GDDR5 levels even if you somehow combine it all. This is a realistic, industry-wide supported solution, or at least the search for one.
Not 2.5D, as I've argued, but you will see on this very page what they have already done with Vita. Apparently.
edit: started from here.
http://www.neogaf.com/forum/showpost.php?p=46761114&postcount=7498
Oh I know they've already used a kind of stacking for Vita.
I mean for PS4. Lots of evidence they want to use it for PS4, but none yet that they are. Despite arguably leading on that kind of design for mobile chips with the Vita, it's still a different kettle of fish for a larger home-console design.
In terms of Amkor, their 'involvement' in consoles doesn't indicate anything about stacking for next-gen; Amkor has been involved in consoles for a long time.
edit - if we're arguing about whether Sony will ever use stacking for PS4, I think for sure they will... I'm talking about just whether the first version will or not.
That PR is taken directly from Micron themselves. HMC has long been on the cards because stacking has long been on the cards; the difference is that no single company could push forward with an industry-wide standard. We will see HMC in our PCs eventually because of this standard. However, the stacked memory in the coming consoles will likely still be more specialized, since it will be on-die. It's the equivalent of Intel and AMD releasing their next-gen parts with stacked main memory on-die, meaning you'd not be able to change it.
GDDR5 or not, one thing is certain: Sony will botch their memory.
That's the cautious view, but we can walk the literature release dates mentioning game consoles using an interposer and stacked RAM back to November 2012, and production is starting within a month or two, so while there is no proof, it's pretty much a lock.