8 gigs of ram is NOT HAPPENING PEOPLE. Holy crap how many times does this have to be said.
8 gigs of ram would need 32 ram chips on the motherboard. THAT IS NOT HAPPENING. You're going to be topping out at 8 ram chips before the motherboard complexity gets too costly and unruly to deal with.
Unless ram densities double REALLY REALLY soon, next gen is topping off at 2 gigs of ram.
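For anyone wondering where the 32-chip figure comes from, it's just capacity divided by per-chip density. A quick sketch, assuming 256MB (2Gbit) chips, the densest parts being discussed in this thread:

```python
# Chip-count math behind the "32 chips" claim, assuming 256MB (2Gbit)
# GDDR5 devices -- an assumption from this thread, not a spec sheet.

def chips_needed(total_gb, chip_mb=256):
    """Discrete DRAM chips required to reach a given capacity."""
    return (total_gb * 1024) // chip_mb

for target_gb in (2, 4, 8):
    print(f"{target_gb}GB -> {chips_needed(target_gb)} chips")
```

So 2GB lands exactly on the 8-chip board described as manageable, while 8GB needs 32 chips unless densities double twice.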
May I ask why? And please don't take this as an insult, but what's your and everybody else's qualification to make such definite statements?
Considering that there are single-GPU graphics cards available with more than 2GB of RAM, I should think that in a year's time there would be little issue putting the same amount in a console.
4GB sounds far-fetched. Would be incredible to have that much RAM though. 4GB of RAM is all consoles need.
But why would 8 gigs of RAM make Sony bankrupt? We are talking 2013, where we might be getting a single module with that much RAM. Like someone pointed out, 8 gigs of RAM costs well below $100 in most stores.
FYI, the 256MB of XDR used to cost Sony $69 at the PS3's launch. One reason was it wasn't mass produced enough to bring down the marginal costs.
I'm sure there are people on here who could explain it better than I could. If they want to chime in and correct anything I say, they are more than welcome to. Here goes, from my understanding of it all.
GPU cards are sold at a pretty high markup. Realize AMD is making like a 50-60% markup on their cards. There's also no need for them to be able to simplify their motherboard design a few years down the line, since these cards are replaced every 6-12 months anyway.
Plus on a GPU you're running traces from the ram chips to the GPU. On a console you need to run traces to the GPU and CPU, so you're already doubling your complexity.
By going above 8 RAM chips, your complexity starts to skyrocket, and it gets harder to simplify the board and build cheaper "slim" versions later.
You design a console so you can make a cheaper version down the road.
GPU manufacturers aren't designing boards with that in mind.
The reason it's easier for videocards to do it is because there are ram chips on both sides of the PCboard.
With 8 AR views, more DSP, and more server functions, 2 gigs is a minimum. To keep costs and power use down, minimums will again be a problem for some of us. To be fair, they could simplify the design if the assumption is we'll have availability of 512MB chips in 2014.
That said, I don't expect more than 2GB either.
The 3D-IC Alliance is a consortium of integrated circuit designers, developers, and manufacturers. Its objective is to promote standards for three-dimensional integrated circuits (3D-ICs) in order to accelerate their availability and acceptance. Industry-wide standards would allow virtually any semiconductor vendor to implement 3D technology.
Non-members are invited to join the 3D-IC Alliance. Membership is free! We ask only that you be willing to read and comment on new standards as they appear.
The first standard released by the Alliance is IMIS™ (Intimate Memory Interconnect Standard), available for immediate download on the Standards page. This standard reflects our initial focus on the standardization of vertical interconnect requirements such as pad sizing and spacing, interconnect keepouts, and materials. The Alliance intends to define a family of interconnect standards to accommodate embedded interconnect systems and various methods for adding backside or frontside interconnect. In January of 2010, a press release from SEMI praised IMIS™: "This standard lays the cornerstone for memory-to-logic 3D integration and establishes a basis for future collaborative efforts in the industry."
Other groups are also working toward standards for 3D-ICs, but so far our IMIS™ stands alone. Members and non-members alike are encouraged to develop and submit more 3D-IC design standards for consideration.
The 3D-IC Alliance plans to publish additional specifications for ICs and/or wafers that are designed to be stacked and vertically integrated. Any ICs or wafers that are processed and designed to these standards should be 3D integratable by most (if not all) Alliance members. The Alliance may also work to produce specific protocols and signaling and electrical specifications, allowing broad adoption and interchange between various die-level or wafer-level vendors.
Best series of articles on 3D stacking.
http://www.electroiq.com/articles/ap/2011/05/rdl-an-integral-part-of-today-s-advanced.html said: 3D Integration, where multiple layers of planar devices are stacked and interconnected through the silicon [3]. The resulting decreased chip area results in much shorter global interconnect, which results in less power required to drive signals the shorter distance.
Once the infrastructure is in place, it is hoped that 3D IC technology will reduce both risk and cost through economic benefits such as:
a) reducing the time it takes to design and verify chips at the most advanced nodes;
b) allowing the use of older analog IP blocks rather than having to develop new IP blocks at the most advanced process nodes; and
c) allowing the mixing of normally incompatible technologies (heterogeneous integration).
Three-dimensional technology can enable the integration of current off-chip memory (like L2 cache) onto the processor chip, thereby eliminating some of the slower and higher-power off-chip buses to off-chip memory and replacing them with high-bandwidth, low-latency vertical interconnections. In addition, on-chip memory (embedded) can be fabricated on a separate layer and bonded to the logic functions. Both of these options improve access latency, the former reducing interconnect length from tens of millimeters to tens of microns, and the latter allowing optimization of memory processing on a separate layer.
While stacking of die, such as memory, is made easier by having identical I/O, stacking other chips will require an I/O interface standardization that is not yet in place. In order to mate such die, silicon interposers with RDL (single or double-sided) are used. It is expected that interposers will function as a stopgap until standardization is in place to allow full wafer/die stacking. Recent commercialization announcements of 3D stacked memory have come from Elpida and Samsung. Product announcements using interposers have come from Xilinx (TSMC manufacturing the interposer) and IBM fabricating interposer-based modules for Semtech.
Look at this link and scroll down to the BOM chart of most of today's graphics cards. http://www.investorvillage.com/mbthread.asp?mb=476&tid=11143553&showall=1
Look at the memory cost of the fastest 2GB GDDR5 for the AMD 6970. That is the level of memory that'll be in the next gen consoles.
Late last year, 2GB cost $48; x4 = $192 just for 8GB of memory. And that is manufacturing cost.
I'd also like to know where I can buy 8GB of GDDR5 for under $100 please!
Newegg sales hahahahahaha, my friend got 16GBs for $60...
The 7870, despite those slides, eats over 200W. Something Pitcairn-level in a console is going to have to be downclocked or have some CUs removed for it to happen.
http://www.hardocp.com/article/2012/03/04/amd_radeon_hd_7870_7850_video_card_review/13
Bolded the parts that are patently untrue.
jeff_rigby said: The arguments I'm hearing about the # of RAM chips on the motherboard determining maximum RAM haven't been true for a few years as the last Xbox redesign used 2.5D stacking for RAM/CPU and GPU. This reduced chip count, made it more efficient and more reliable. The RAM used was 3D stacked.
Why would that not be an option on a console?
Would that issue not be resolved by using a Fusion chip where the GPU and CPU are in the same die?
At the same time I wonder, why a console could not have a split ram pool as like a PC does. Shouldn't that solve most issues anyway?
I was about to ask, why not just take the small performance hit and use DDR3 (going by this)? But their conclusion reminded me that more RAM does not matter if it ultimately results in lower performance.
Don't want to fight about anything else at this time.
http://news.techeye.net/chips/ibm-micron-claim-breakthrough-in-3d-chips said: 12/2011: IBM and Micron claim to have made a significant breakthrough in 3D chip manufacturing, as Micron plans to use IBM's new through-silicon vias, or TSVs, to build a Hybrid Memory Cube (HMC).
TSVs are vertical conduits which connect a stack of chips to allow for extremely fast connections, up to 15 times faster than technology available now. It's IBM's 3D manufacturing that has made the product possible. Parts for the Micron kit will be made at IBM's fab in East Fishkill, New York, using its 32nm high-K metal gate process.
Current top devices typically offer speeds of up to 12.8 GB/s, but Micron's HMC should be able to manage 128 GB/s. On top of that, the HMC runs on 70 percent less energy compared to existing devices, and in a small form factor that's about 10 percent the size of traditional memory products.
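Taking the quoted figures at face value, the claimed jump is a full order of magnitude:

```python
# Sanity-checking the quoted HMC figures (numbers from the article above,
# taken as illustrative claims, not a spec sheet).

current_bw_gbs = 12.8   # typical top DRAM device of the day, per the article
hmc_bw_gbs = 128.0      # claimed for the Hybrid Memory Cube prototype

gain = hmc_bw_gbs / current_bw_gbs
print(f"Bandwidth gain: {gain:.0f}x")  # 10x
```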
Going by Raistlin here, manufacturing for perhaps just the first year with more than 8 chips and then simplifying seems feasible, doesn't it?
Why are you quoting total system power at the wall as the power of the card??
http://www.techpowerup.com/reviews/AMD/HD_7850_HD_7870/24.html
TechPowerUp uses a high-end multimeter in between the PCI bus, connectors, and card to completely isolate it from the system. They show 115W for the Crysis 2 Extreme benchmark and 144W for Furmark.
You are correct, the CPU and GPU are on the same die. The eDRAM is 2.5D stacked onto the CPU and GPU die. GDDR3 RAM is off chip. Thanks for the correction.
XCGPU engine
Point remains, though, without my incorrect reference to the last Xbox 360 iteration: 3D stacking is impacting the size of memory chips, and we can't go by old packaging models to determine how much RAM will be on a console motherboard.
3D stacked Memory chips are faster/smaller and more energy efficient.
Too expensive and too far away for next gen.
So how does one determine how much weight to give to rumors? There is the OBAN rumor: AMD, not IBM.
So, time for a little speculation. Oban is being made by IBM primarily, so that almost definitively puts to bed the idea of an x86 CPU that has been floating. We said we were 99+% sure that the XBox Next/720 is a Power PC CPU plus an ATI GCN/HD7000/Southern Islands GPU, and with this last data point, we are now confident that it is 99.9+%
http://www.neogaf.com/forum/showthread.php?t=458527 said: In terms of semiconductors, Tsuruta-san picked out emerging 'through silicon via' designs. These stack chips with interconnects running vertically through them to reduce length, raise performance and reduce power consumption.
(Note: it's believed the next Power architecture will be a stacked design. This is also the architecture that has been speculated could incorporate aspects of Cell's design, or a next-gen SPU...). It's due in 2013.
Like someone mentioned above, go with a split memory pool: 2 gigs of DDR3 for the CPU and 2 gigs of GDDR5 for the GPU?
Just curious as to everyone's thoughts on this.
DDR3 is cheap (and slow, compared to GDDR) but... why? You'd still have the issue of a more complex motherboard and difficulty in bringing costs down later as a result. I'd expect all consoles next gen to have a unified memory pool of fast memory (hence 2GB of GDDR5). Could be wrong, of course, but it's what I expect.
I am not really sure what the function of the eDRAM was in the 360, but someone told me it aided in the implementation of AA. Thus, are we to see eDRAM of more than 10MB implemented in the PS4?
Also, is there a possibility of embedded flash storage dedicated to OS this time around?
It's funny how sometimes, as technology develops, what was originally a good idea can become a bad one. EDRAM for video cards is such a thing.
Having worked a fair amount with the Xbox 360 the last 2.5 years I find that EDRAM mostly is standing in the way, and rarely providing any benefit.
For the next generation consoles chances are we want 1080p, full HDR, at least 4xMSAA and probably many want additional buffers for deferred rendering or other techniques. I don't think it will be possible to embed enough EDRAM to fit all for many games, so if you're designing a future console now and are thinking of using EDRAM, please don't. Or at least let us render directly to memory. Or only let the EDRAM work as a large cache or something if you really want it.
No, eDRAM was a failure for the 360. It will most likely not be around next gen, unless the 720 needs it for BC.
http://www.humus.name/index.php?page=News&ID=309
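To put numbers on why 10MB of eDRAM forces tiling, here's a rough render-target sizing sketch. The layout assumptions are mine, not humus's: 32-bit color plus 32-bit depth/stencil per sample, with MSAA multiplying storage by the sample count.

```python
# Back-of-envelope render-target sizing against a 10MB eDRAM budget.
# Assumes 4 bytes color + 4 bytes depth/stencil per sample (a common
# layout); MSAA multiplies per-pixel storage by the sample count.

EDRAM_MB = 10

def framebuffer_mb(width, height, samples=1, bytes_per_sample=8):
    return width * height * samples * bytes_per_sample / (1024 * 1024)

for w, h, aa in [(1280, 720, 1), (1280, 720, 4), (1920, 1080, 4)]:
    mb = framebuffer_mb(w, h, aa)
    verdict = "fits" if mb <= EDRAM_MB else "needs tiling"
    print(f"{w}x{h} {aa}xMSAA: {mb:.1f}MB -> {verdict}")
```

720p with no AA squeaks in at about 7MB; add 4xMSAA, or go to 1080p, and you're several times over budget, which is exactly the tiling situation described above.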
Put me in the camp that believes we'll get 2GB max next Gen. Even if a company could use stacking to store more chips, why eat the extra cost when it won't be properly utilized by 80% of the software released?
The edram in the 360 is mostly used to get around the bandwidth issues for current Gen systems. While it does offer benefits for AA and alpha effects, it also has its drawbacks such as tiling being a requirement when the backbuffer can't fit in the edram.
We likely won't be seeing it in a next Gen system outside of the Wii-U.
So based on one developer, it was a failure? Opinions will vary depending on the developer and their needs. MS just made the mistake of designing it so you have to use it.
With regards to the tiling method of rendering, is it the reason why most third-party PS3 games have V-sync, as opposed to the 360?
Humus isn't "just one developer."
http://semiaccurate.com/2012/01/18/xbox-nextxbox-720-chips-in-production/
Just got two independent confirmations about my PS4 x86 article.
99+% sure it is, but not 100%
I laughed at the sheer ignorance behind those comments then, and I laugh again now.
GDDR3's highest density is 128MB chips...so you'd need even more to get 8GB than using GDDR5
DDR3, not GDDR3. DDR3 has 4Gbit density chips already. There is 8Gbit as well, but at that point you're trading clocks for density. DDR3 is slow enough as it is.
If >4GB is so important that you're willing to go with a slow speed, then I'd hope for a big chunk of eDRAM to complement it, but then you're sort of splitting the CGPU perimeter between the eDRAM I/O and the DDR3 (just like 360 does with GDDR3 and its eDRAM).
It would just be so much simpler to have a UMA 256-bit GDDR5 bus and not worry about tiling. Sure that makes reducing the size of a GPU harder, but the goal seems to be a combined CPU/GPU anyway, which naturally boosts the area of the digital circuitry and still makes a wide-bus feasible.
Decisions decisions...
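The bandwidth gap driving that preference is easy to sketch: peak bandwidth is just bus width in bytes times effective data rate. The rates below are illustrative examples, not quotes from any console roadmap:

```python
# Peak-bandwidth comparison for the memory options discussed above.
# data_rate_gtps is the effective per-pin transfer rate in GT/s
# (illustrative values, not from any console spec).

def peak_bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(peak_bandwidth_gbs(256, 5.5))  # 256-bit GDDR5 @ 5.5 GT/s -> 176.0 GB/s
print(peak_bandwidth_gbs(128, 1.6))  # 128-bit DDR3-1600        ->  25.6 GB/s
```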
The only remotely possible way it would have DDR3 is if that cache is separate from, and additional to, a large chunk of unified GDDR5 dedicated to the game program and graphics. It could be a bank for running the OS or other background functions. But it's a long shot. I always said 2-4GB tops; I think 3GB of unified GDDR5 would be ideal, but 2GB is the most likely outcome.
People asking for 8GB are absurd. For the life of me, I don't even know what a game console would do with that much RAM. 2GB should be enough. For a max res of 1080p, they should have anywhere from 1 to 1.5GB to dedicate to video, depending on how complex the game itself is. Next-gen FPS vs. bloated TES VI.
I don't think 2GB will be a situation like how the X360 would have turned out to be dramatically gimped if it had shipped with 256MB.
I'm really expecting PS4 to be as good or (hopefully) better than PS3 was back in 2006, at a more acceptable launch price. Ditching the expensive proprietary silicon is a smart move. And a fast BD drive will be cheap as chips.
Just some food for thought here:
1) The Cell is dead. There are no two ways about it, despite the (excellent, btw) digging you've done. You won't have a Cell as a primary processor for anything next gen.
2) The rumours have been prevalent for over a year now. The "where there's smoke there's fire" rule applies, and I'd wager that there's an *extremely* good chance you'll get an x86 PS4.
3) Oban is for the XBox, and even then the Xbox could end up going all-x86 for all we know.
It's possible that there will be no Cell-type SPU in the PS4 CPU IF the next generation Power PC is fast enough and can emulate an SPU. This interpretation of rumors would support "Cell is dead" but allow backward compatibility. (Note: it's believed the next Power architecture will be a stacked design. This is also the architecture that has been speculated could incorporate aspects of Cell's design, or a next-gen SPU...). It's due in 2013.
Change x86 in the quote above to Power PC and eliminate features used only in PCs.
http://www.tgdaily.com/hardware-features/50040-cpu-gpu-apu said: AMD to support Fusion APU architecture, as it combines multi-core x86 processing, memory controllers, a PCI-E interface and massively parallel GPU computing on a single piece of silicon.
"The APU includes hundreds of parallel processing cores that can be [used] for HPC applications through the OpenCL programming framework. Unlike conventional GPU server architectures, APU parallel multiprocessors share the same physical memory space with CPU cores," Pokorny explained.
"As a result, the programming model for APUs is simpler, bottlenecks for data movement between GPU and main memory are avoided and data duplication is eliminated.
Senior AMD VP Rick Bergman - who briefly demoed the processor - explained that the APU (Accelerated Processing Unit) combined CPU, GPU, video processing and other accelerator capabilities in a single-die design.
"The Fusion Family of APUs represent a distinctly powerful processing approach to the evolving digital consumer landscape, where more than 28 billion videos are watched each month online and a thousand pictures are uploaded to social networking sites every second," said Bergman.
"This explosion in multimedia requires new applications and new ways to manage and manipulate data. Low resolution video needs to be up-scaled for larger screens, HD video must be shrunk for smart phones and home movies need to be stabilized and cleaned up for more enjoyable viewing."
Sony does have a history of fuc.... errr.. splitting up the RAM pool. They have done it in all their consoles so far. I really hope they don't do that this time around. Let the devs decide how much RAM to use for what.
I agree with you, but the AMD Fusion does have multimedia support, so that is not a valid part of the argument. x86 (read: current AMD design) is a general-purpose CPU which will run hotter and use more energy doing the tasks (games) that a PPC can do. If you are putting both CPU and GPU in the same package, this supports using a PPC, if only for heat. The AMD can't run more than 4 x86 cores at full duty cycle, and the OBAN is rumored to have 6 cores (same as one of the AMD Fusion designs). Games often push CPUs to 100% duty cycle, but general-purpose PC use usually doesn't. The AMD Fusion has hardware support for codecs and multimedia, as this is an area that could use large amounts of CPU duty cycle. Then there is backward compatibility.
This needs to be investigated... what did the confirmations say?
east of eastside said: Charlie tweeted yesterday the following relating to these rumors:
https://twitter.com/#!/CDemerjian/st...37612361875459 "Just got two independent confirmations about my PS4 x86 article."
I still say there is less than zero chance that the next Playstation will be x86 based. There are no compelling advantages (why use a general purpose solution for a primarily multimedia focussed system?) and the sheer amount of disruption it would cause to a corporation historically focussed on MIPS/PPC architecture would be very costly.
The maximum chip size for GDDR5 RAM is 256MB. 8 of these makes 2GB.
8 chips, and all the data lines between them, especially if it uses a 256-bit bus or bigger, will make the motherboard complex and expensive.
I think in the next generation they'll use two memory pools: a faster one, something around 1.5GB of GDDR5, and a bigger and slower one with GDDR3, for caching and OS purposes.
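The 8-chip/2GB pairing also falls out of the bus width: each GDDR5 device exposes a 32-bit interface, so (ignoring clamshell mode) a 256-bit bus means 8 chips, and at 256MB per chip that's 2GB. A sketch under those assumptions:

```python
# Why a 256-bit GDDR5 bus implies 8 chips, and hence 2GB at 256MB/chip.
# Assumes one 32-bit device per 32 bits of bus (no clamshell mode).

CHIP_IO_BITS = 32
CHIP_MB = 256  # densest GDDR5 part assumed in this thread

def pool_for_bus(bus_bits):
    chips = bus_bits // CHIP_IO_BITS
    return chips, chips * CHIP_MB / 1024  # (chip count, capacity in GB)

print(pool_for_bus(256))  # (8, 2.0)
print(pool_for_bus(384))  # (12, 3.0)
```

Getting past 2GB without more chips therefore needs denser parts, clamshell mode, or stacking, which is the whole argument of this thread.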