
PS4 Rumors: APU code-named 'Liverpool', Radeon HD 7970 GPU, Steamroller CPU, 16GB Flash


James Sawyer Ford

Gold Member
DDR3 = slow as shit and terrible for graphics. GDDR5 = much faster and allows much greater graphical capability. You only see DDR3 on cheap, discount, low-end GPUs.

Of course, perhaps Sony is cheaping out and going with DDR3. But 2GB of GDDR5 is infinitely preferable to, say, 8GB of DDR3, which isn't better for anything in any way.

Well, to combat that, Microsoft may go with a hefty amount of eDRAM.
 

i-Lo

Member
DDR3 = slow as shit and terrible for graphics. GDDR5 = much faster and allows much greater graphical capability. You only see DDR3 on cheap, discount, low-end GPUs.

Of course, perhaps Sony is cheaping out and going with DDR3. But 2GB of GDDR5 is infinitely preferable to, say, 8GB of DDR3, which isn't better for anything in any way.

The weird and highly possible thing could be MS touting this quantity advantage to generally uninformed consumers. Misleading ads are the norm anyway, so I wonder how you can counter something like that down the line. To people, more is always better, without context.
 

StevieP

Banned
The weird and highly possible thing could be MS touting this quantity advantage to generally uninformed consumers. Misleading ads are the norm anyway, so I wonder how you can counter something like that down the line. To people, more is always better, without context.

GAF is nowhere near immune to this. lol
I look forward to the threads :(
 
The weird and highly possible thing could be MS touting this quantity advantage to generally uninformed consumers. Misleading ads are the norm anyway, so I wonder how you can counter something like that down the line. To people, more is always better, without context.

The difference should be immediately apparent in the graphics of the actual games. Hard to counter that with "sure, they have better graphics, but we have a ton of incredibly slow RAM!!1"
 

Ryoku

Member
Well yes, but I'm talking about the situation where 8 GB of DDR3 + eDRAM (say, enough to get a 1080p framebuffer) is compared to 4 GB of GDDR5.

What would be better for graphics?

Not sure. I don't know how much eDRAM is required in order to (for lack of a better phrase) "trump" the speed advantage of GDDR5.
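For rough scale on the "enough eDRAM for a 1080p framebuffer" part, here's a back-of-envelope sketch in C; the buffer layout is an illustrative assumption, not from any leak or spec:

/* Back-of-envelope: how much on-chip memory a 1080p framebuffer needs.
   Purely illustrative numbers, not from any leak or spec. */
#include <stdio.h>

int main(void)
{
    const double MB = 1024.0 * 1024.0;
    int w = 1920, h = 1080;

    double color32 = (double)w * h * 4;   /* one 32-bit color target */
    double depth32 = (double)w * h * 4;   /* 32-bit depth/stencil    */

    printf("Single 32-bit color target: %.1f MB\n", color32 / MB);
    printf("Color + depth:              %.1f MB\n", (color32 + depth32) / MB);
    /* With MSAA or multiple render targets the footprint multiplies, which is
       why "enough eDRAM for 1080p" isn't one fixed number. */
    printf("Color + depth at 4x MSAA:   %.1f MB\n", 4 * (color32 + depth32) / MB);
    return 0;
}

So a plain 1080p color + depth setup is roughly 16MB, but deferred or MSAA'd renderers blow well past that, which is where the "how much is enough" question comes from.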
 

KageMaru

Member
That's what I'm expecting from how everything sounds so far.

This would be a bad idea IMO. eDRAM is expensive, you never know if the amount will be enough, and process shrinks look like they've stalled, since the eDRAM in the 360 is still at 65nm.

I'm thinking a thin OS on a split pool of 2GB GDDR5 and 4GB DDR3 would be the better option.
 

Auto_aim1

MeisaMcCaffrey
The weird and highly possible thing could be MS touting this quantity advantage to generally uninformed consumers. Misleading ads are the norm anyway, so I wonder how you can counter something like that down the line. To people, more is always better, without context.
Average consumers don't care about how much RAM is in the system. It's not something that is usually marketed.
 
This would be a bad idea IMO. eDRAM is expensive, you never know if the amount will be enough, and process shrinks look like they've stalled, since the eDRAM in the 360 is still at 65nm.

I'm thinking a thin OS on a split pool of 2GB GDDR5 and 4GB DDR3 would be the better option.

Oh I'm not saying it's a good idea. Just what seems to be the target route based on what we know.
 

coldfoot

Banned
You'll probably need lots and lots of eDRAM, since renderers have changed a lot since 2004 and you need to access multiple render targets before combining them into one final buffer. I'd wager it's better to use the extra die space for GPGPU clusters and just go with GDDR5. It would also make development easier as a bonus.
 

missile

Member
Its also a fallacy to see the division between the regions of PS3 memory as being rigid barriers, the problem was what happened to load/store cycle times when data was accessed in the wrong way by the wrong piece of hardware - essentially it demands very specific data paths as going the wrong way exacts a hideous performance penalty and/or stalling. ...
True. But it's the programmer's job to know the system he is working on.

What irritates me quite a bit is how you talk about uniform memory systems
vs. non-uniform systems in some of your recent posts, while somehow stating
that the former is better than the latter. So I will take this on for the
rest of the post.

The point you've made above doesn't show that a uniform memory
system is any better. If anything, it's better from a programmer's point of
view, since it simplifies the programming model, but it's not better in terms
of overall efficiency, i.e. in scalability, latency, and bandwidth on
multicore systems.

Having each processor on a different bus to its own local memory solves the
uniform-memory bus contention problem. Further, you'd better bring the memory
very close to the processor to reduce latency (see the SPEs' local store), and
preferably without putting any control logic, such as cache logic, in front of
it. And since the memory lives on different buses in a non-uniform memory
system, one has to use explicit communication (DMA) to access data from
another processor's memory. This usually makes programming such a system more
complex at first. But if you look at it, you will see that the computational
efficiency of the large problems one wants to solve (esp. in computer
graphics, physical simulation, etc.) depends heavily on the data layout and
the memory transfers. So you'd better be in full control of how your data is
laid out and how it is transferred throughout the system. One has to program
for the data to gain more computational efficiency, which seems to be a new
concept to many people, especially those coming only from the PC world. The
latency induced by accessing non-local memory in a non-uniform memory system
can often be hidden by a technique called multi-buffering, i.e. you DMA in the
next block of data while the processor operates on the current one. However,
this also happens on uniform memory systems - guess, for example, why Intel
introduced the prefetch instructions? Hence, explicit memory transfers (DMA)
let you adapt the dataflow of your problem much better to a multicore system,
which, in general, will yield higher computational efficiency - if done right,
programming-wise.
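To make the multi-buffering idea concrete, here is a minimal double-buffering sketch in plain C. The dma_get()/dma_wait() calls are placeholders standing in for whatever explicit transfer primitives the hardware exposes (on the SPEs that would be the MFC DMA intrinsics); treat this as an illustration of the pattern, not a real API:

/* Double buffering: overlap the DMA of chunk i+1 with the computation on
   chunk i. dma_get()/dma_wait() are placeholders for the platform's explicit
   transfer primitives, not a real API. */
#include <stddef.h>

#define CHUNK 4096

extern void dma_get(void *dst, const void *src, size_t bytes, int tag); /* start async copy */
extern void dma_wait(int tag);                                          /* block until done */
extern void process(float *data, size_t count);                         /* the actual work  */

void process_stream(const float *src, size_t total_chunks)
{
    static float buf[2][CHUNK];
    int cur = 0;

    dma_get(buf[cur], src, CHUNK * sizeof(float), cur);      /* prime the first buffer */

    for (size_t i = 0; i < total_chunks; ++i) {
        int next = cur ^ 1;
        if (i + 1 < total_chunks)                             /* start fetching chunk i+1 */
            dma_get(buf[next], src + (i + 1) * CHUNK,
                    CHUNK * sizeof(float), next);

        dma_wait(cur);                                        /* data for chunk i is ready */
        process(buf[cur], CHUNK);                             /* compute overlaps the transfer above */
        cur = next;
    }
}

While process() runs on the current buffer, the transfer for the next one is already in flight, which is exactly the latency hiding described above.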


And it pays off. The design of the Cell processor led to the first
petaflop computer in the world in 2008;
http://www.top500.org/list/2008/06/100.

Anyway, let's contrast this with Intel's Larrabee. Guess why it failed?
It failed because of its cache-coherent shared memory model, which wasn't
able to deliver data fast enough to the computational units. Let me give
you an example:

The PowerXCell 8i's peak performance, counting only the SPEs (neglecting the
PPU), computes as follows:
8 [SPE@3.2GHz]
= 8*(8 flops * 3.2GHz)
= 8 * 25.6 Gflops(SP)
= 204.8 Gflops(SP)
(SP := single precision)

Now to the interesting part. The PowerXCell 8i processor performs 202
GFLOPS on a 4k x 4k SGEMM kernel utilizing 8 SPEs - a 4096x4096 matrix
multiplication in single precision, a well-known test for judging the
computational efficiency of a multicore system, since the SGEMM kernel finds
application in quite a lot of mathematical and physical computations.
Hence, the PowerXCell 8i, as well as the Cell/B.E. processor inside the PS3,
performs the SGEMM computational kernel at ~99% of its peak performance! No
other processor in existence can match this number. To put the PowerXCell 8i
in perspective against Larrabee, with respect to the number of cores, we have
to take two PowerXCell 8i to get 16 SPEs. Two PowerXCell 8i processors perform
the SGEMM kernel at 406.04 GFLOPS, which amounts to ~99% of the theoretical
peak performance of 409.60 GFLOPS; see Daniel Hackenberg - Fast Matrix
Multiplication on Cell (SMP) Systems,
http://tu-dresden.de/die_tu_dresden...alyse_von_hochleistungsrechnern/cell//matmul/

Larrabee performs the 4k x 4k SGEMM kernel with 16 cores at 2GHz at 825
GFLOPS, as shown by Intel, which is only about twice as fast as two PowerXCell
8i processors (16 SPEs) - and one has to consider that the vector length
of Larrabee is 16 while that of the Cell processor is only 4.

What's the theoretical peak performance of the Larrabee configuration Intel
has run the test on?
Here it is;
16 [core@2.0GHz]
= 16*(32 flops * 2.0GHz)
= 16 * 64 Gflops(SP)
= 1024 Gflops(SP)

Now we can compute the efficiency of the SGEMM kernel for Larrabee;
(825 GFLOPS * 100) / 1024 = ~81%

Hence, we have
2 PowerXCell 8i @ SGEMM (4k x 4k) = 406.04 GFLOPS; efficiency = ~99%
Larrabee @ SGEMM (4k x 4k) = 825 GFLOPS; efficiency = ~81%
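For anyone who wants to check the arithmetic, the peak numbers follow the same simple formula used above (cores x flops per cycle x clock), and efficiency is just measured over peak. A tiny sketch using only the figures quoted in this post:

/* Recomputing the peak and efficiency figures quoted above.
   peak = cores * flops_per_cycle * clock_GHz (single precision). */
#include <stdio.h>

static double peak_gflops(int cores, int flops_per_cycle, double ghz)
{
    return cores * flops_per_cycle * ghz;
}

int main(void)
{
    double cell2 = peak_gflops(16, 8, 3.2);   /* two PowerXCell 8i (16 SPEs) -> 409.6  */
    double lrb   = peak_gflops(16, 32, 2.0);  /* Larrabee, 16 cores @ 2GHz   -> 1024.0 */

    printf("2x PowerXCell 8i: peak %.1f GFLOPS, SGEMM 406.04 -> %.0f%%\n",
           cell2, 100.0 * 406.04 / cell2);
    printf("Larrabee (16c):   peak %.1f GFLOPS, SGEMM 825.00 -> %.0f%%\n",
           lrb, 100.0 * 825.0 / lrb);
    return 0;
}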

This shows that Larrabee's computational units get starved for data - a
weak spot of Larrabee's uniform memory architecture. It seems that
Larrabee's memory model, an implicit cache-coherent shared memory model,
can't deliver the data fast enough.

The explicit non-uniform memory model of the Cell processor is what makes
this processor so efficient.

Last but not least, I encourage you to read;
S. Williams, J. Carter, L. Oliker, J. Shalf, K. Yelick,
"Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms",
International Parallel & Distributed Processing Symposium (IPDPS), 2008.
[PDF]: http://bebop.cs.berkeley.edu/pubs/williams2008-multicore-lbmhd.pdf
From the section Summary and Conclusion;
"... Results show that the Cell processor offered (by far) the highest raw
performance and power efficiency for LBMHD, despite having peak
double-precision performance, memory bandwidth, and sustained system power
that is comparable to other platforms in our study. The key architectural
feature of Cell is explicit software control of data movement between the
local store (cache) and main memory. However, this impressive computational
efficiency comes with a high price — a difficult programming environment
that is a major departure from conventional programming. Nonetheless, these
performance disparities point to the deficiencies of existing
automatically-managed coherent cache hierarchies, even for architectures
with sophisticated hardware and software prefetch capabilities. The
programming effort required to compensate for these deficiencies demolishes
their initial productivity advantage. ..."
.

I'm not against uniform memory. If performance is not of utmost importance,
one can use system resources to simplify the architecture for various
reasons. The (casual) market is somehow becoming saturated with performance.
And since there is enough performance for the casual (gaming) market,
architectures (what you see as a programmer) can be simplified. However,
don't expect major breakthroughs or leaps. Perhaps that's also the reason
John Carmack said he's "not all that excited" by next-gen hardware.
 
Yes, 4GB of GDDR5 would still handily trump 8GB of DDR3 and some eDRAM. It's not even close.


Well... it's impossible to say without full details on:

- eDRAM size and the type of VPU architecture
- Bus bandwidth / memory clock for both the DDR3 and the GDDR5
- Quantity of available memory (minus OS-allocated memory, and any limitation of any kind, like shared RAM vs. separate system + video RAM)
 
DDR3 = slow as shit and terrible for graphics. GDDR5 = much faster and allows much greater graphical capability. You only see DDR3 on cheap, discount, low-end GPUs.

Of course, perhaps Sony is cheaping out and going with DDR3. But 2GB of GDDR5 is infinitely preferable to, say, 8GB of DDR3, which isn't better for anything in any way.

False; it's better at holding more stuff.

I can't claim I'm expert enough to know something like "which is better", but I don't think it's clear-cut, especially when you throw eDRAM or VRAM into the equation.

The leaked PS4 specs put it at around 200GB/s of bandwidth; you can get ~100GB/s on DDR3 with a 256-bit bus. A big difference, but enough to cancel out 4x the capacity? One of the things that would likely take a hit if bandwidth-limited is AA. Well, I don't know; I don't think I'd mind a little less AA too much in exchange for other big gains.
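For anyone wanting to sanity-check those bandwidth figures, peak bandwidth is just the effective transfer rate times the bus width. A rough sketch; the transfer rates below are assumptions for illustration, not leaked specs:

/* Peak memory bandwidth = transfer rate (MT/s) * bus width (bits) / 8.
   The transfer rates below are assumptions for illustration, not leaked specs. */
#include <stdio.h>

static double bandwidth_gbs(double mt_per_s, int bus_bits)
{
    return mt_per_s * bus_bits / 8.0 / 1000.0;   /* MB/s -> GB/s */
}

int main(void)
{
    printf("DDR3-1600,   256-bit: %.1f GB/s\n", bandwidth_gbs(1600, 256));  /* ~51  */
    printf("DDR3-2800,   256-bit: %.1f GB/s\n", bandwidth_gbs(2800, 256));  /* ~90  */
    printf("GDDR5 6GT/s, 256-bit: %.1f GB/s\n", bandwidth_gbs(6000, 256));  /* ~192 */
    return 0;
}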

Frankly, I like the way the Xbox is stacking up vs. the PS4 so far. The actual game reveals should be fascinating, though. After a gen where the PS3 and 360 are almost exactly equal in every way, I think it'll be a lot of fun to have systems with vastly different strengths out there (maybe a nightmare for multiplatform developers, though).
 

mrklaw

MrArseFace
DDR3 = slow as shit and terrible for graphics. GDDR5 = much faster and allows much greater graphical capability. You only see DDR3 on cheap, discount, low-end GPUs.

Of course, perhaps Sony is cheaping out and going with DDR3. But 2GB of GDDR5 is infinitely preferable to, say, 8GB of DDR3, which isn't better for anything in any way.

Well, to combat that, Microsoft may go with a hefty amount of eDRAM.

can we spend a little time digging into this?

What specific benefits do you get from GDDR5? Bandwidth, sure, but do you get latency advantages?

Mobile GPUs using DDR3 are already significantly slower than the same chip with GDDR5 attached, so the GPU is being starved by the slower RAM. In that situation, I don't see the point of 'more' if you can't use it.

But how does eDRAM come into it? What specifically in a GPU needs the super-high bandwidth?

If it's general texture fetching and vertex data to fundamentally draw the scene, then it seems clear that DDR3 is a bad choice, no matter how much of it you have (IMO).

But if basic texture fetching is OK with DDR3, and the GPU *really* needs the bandwidth/latency for shader operations, then maybe having a decent amount of on-chip memory (eDRAM etc.) can handle most of those cases at the speed needed, and DDR3 is fast enough for secondary tasks like textures and holding world data in a cache.

Just a thought
 
can we spend a little time digging into this?

What specific benefits do you get from GDDR5? Bandwidth, sure, but do you get latency advantages?

Mobile GPUs using DDR3 are already significantly slower than the same chip with GDDR5 attached, so the GPU is being starved by the slower RAM. In that situation, I don't see the point of 'more' if you can't use it.

But how does eDRAM come into it? What specifically in a GPU needs the super-high bandwidth?

If it's general texture fetching and vertex data to fundamentally draw the scene, then it seems clear that DDR3 is a bad choice, no matter how much of it you have (IMO).

But if basic texture fetching is OK with DDR3, and the GPU *really* needs the bandwidth/latency for shader operations, then maybe having a decent amount of on-chip memory (eDRAM etc.) can handle most of those cases at the speed needed, and DDR3 is fast enough for secondary tasks like textures and holding world data in a cache.

Just a thought


Yeah, that's the gist of it, and like you I don't know enough to answer the questions.

One thing I'm pretty sure of, proven with the Xbox and 360: Microsoft's engineers are smart, so I'm not worried that they don't know what they're doing. The worry is more whether they've been sold out for set-top-box capabilities.

From what I know, GDDR5's strong suit isn't latency; it's actually poor there. But GPUs don't care about latency, only throughput, where it excels.
 

Durante

Member
Just 2 Bulldozer modules would be rather weak in terms of CPU. Even the revised version still has worse IPC than a Phenom II.

Oh, and I agree with everything missile posted above, except for the infuriating manual line breaks :p. Great post.
 
Does it have to be eDRAM? Even the GameCube used 1T-SRAM, and AMD has designs for T-RAM/Z-RAM as well. If I remember correctly, all those types are faster than eDRAM at nearly the same price (at least 1T-SRAM is).
 

Ashes

Banned
True. But it's the programmer's job to know the system he is working on.

I'm not against uniform memory. If performance is not of utmost importance,
one can use system resources to simplify the architecture for various
reasons. The (casual) market is somehow becoming saturated with performance.
And since there is enough performance for the casual (gaming) market,
architectures (what you see as a programmer) can be simplified. However,
don't expect major breakthroughs or leaps. Perhaps that's also the reason
John Carmack said he's "not all that excited" by next-gen hardware.

Something tells me you're a low level coder. Awesome to have you on board.
 
Just 2 Bulldozer modules would be rather weak in terms of CPU. Even the revised version still has worse IPC than a Phenom II.

It'll be a dream compared to Cell. A dream of unbelievable proportions. Just because, you know, it's 20% less than the current top-of-the-line x86 means nothing when it's probably 20x better/easier than Cell.

Anyway, Piledriver is dropping soon, and the original rumor was Steamroller. Now the rumor is "Jaguar".
 

Durante

Member
It'll be a dream compared to Cell. A dream of unbelievable proportions. Just because, you know, it's 20% less than the current top-of-the-line x86 means nothing when it's probably 20x better than Cell.
Have you ever actually programmed Cell? That 20x number is utterly ridiculous.

Oh, and a 3.2 GHz 2-module Steamroller isn't 20% off from the current "top-of-the-line x86". It's not even half as capable as that, and that's when restricting "top of the line" to the consumer market.

Something tells me he isn't, LOL. Cell has its strengths, but most programmers don't like it, and "efficient" isn't the way I'd describe it; quite the opposite.
That's because you, very clearly, have no idea what you are talking about. Say whatever you will about the difficulty of programming it, but the CBE is still one of the most efficient architectures around.
 

goomba

Banned
The PS4 sounds like the GameCube compared to the original Xbox. The Xbox was brute force with bottlenecks, whilst the GameCube had much less RAM than the Xbox, but its main memory was fast as fuck for its day.
 

coldfoot

Banned
A 4x Jaguar at 3.2GHz would also be a decent CPU. It would have 2x the cores and 2x the clock speed of Brazos before even considering architectural IPC improvements.
 

mrklaw

MrArseFace
The worry is more whether they've been sold out for set-top-box capabilities.

This is my big worry. MS can smell the living room. More RAM for Gears of War? Nah, it's so we can install Windows 8 and record TV shows while you play crappy Kinect games.

:shudder:
 
That's because you, very clearly, have no idea what you are talking about. Say whatever you will about the difficulty of programming it, but the CBE is still one of the most efficient architectures around.


At certain, trivial workloads.

Let's see, one of its biggest critics is Carmack; who to trust, you or him...

The IPC of the Xenon/Cell PPE is supposed to be 0.2, vs close to 1 for current X86...

I actually don't think 20x is an exaggeration. In-order to out-of-order alone is an insane difference.
 
and weren't people dismissing 256-bit buses as expensive? At 128-bit it'd be even slower.

The PS4 is using a 256-bit bus. Now what? It stands to reason that if one can do it, the other can.

I've changed my mind a bit on 256-bit once it was pointed out that, usually, the chips will later be combined into one big SoC anyway, mitigating the size problems.
 

coldfoot

Banned
They already have DDR3-2800, which gets you to ~90 GB/s. If the rumors are right, it will be using DDR4, presumably slightly faster.
I was talking about feasible DDR3 that would be available in large quantities and cheap enough to include in a console that has to be in production in less than a year, hence the keyword mass-produced.
If we're talking about exotic memory, we might as well take XDR2 into consideration, as it has the same chance of going into the Xbox as DDR3-2800.

M.v
 
I was talking about feasible DDR3 that would be available in large quantities and cheap enough to include in a console that has to be in production in less than a year, hence the keyword mass-produced.
If we're talking about exotic memory, we might as well take XDR2 into consideration, as it has the same chance of going into the Xbox as DDR3-2800.

M.v

I don't really know the "mass production" schedules or feasibility here. DDR3-2800 is being produced; I think MS can order it to be mass-produced. I think DDR4 is ramping soon. And it doesn't have to be in production in less than a year, unless you also claim the Wii U must be in mass production right this instant...
 

Durante

Member
Let's see, one of its biggest critics is Carmack; who to trust, you or him...
I have a massive amount of respect for John Carmack, and I'd always "trust" him above myself. However, "trust" is not a good way to assess facts. What about "trusting" peer-reviewed publications?

The IPC of the Xenon/Cell PPE is supposed to be 0.2, vs close to 1 for current X86...

I actually don't think 20x is an exaggeration. In-order to out-of-order alone is an insane difference.
Well, if you only count the PPE, then maybe you could get close to 20x for some stupid pointer-chasing workloads. But the PPE offers less than 1/8th of the Cell's actual computation capabilities. The real meat is in the SPEs.
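A quick back-of-envelope on that "less than 1/8th" point, reusing the 8 flops/cycle x 3.2GHz peak figure quoted earlier in the thread (theoretical single-precision peaks only):

/* PPE vs. SPE share of Cell's theoretical SP peak, using the same
   8 flops/cycle * 3.2 GHz figure quoted earlier in the thread. */
#include <stdio.h>

int main(void)
{
    double per_unit = 8 * 3.2;           /* 25.6 GFLOPS for the PPE or one SPE  */
    double spes     = 8 * per_unit;      /* a full Cell has 8 SPEs -> 204.8     */
    double total    = per_unit + spes;   /* 230.4 (the PS3 exposes fewer SPEs)  */

    printf("PPE share of peak: %.1f%%\n", 100.0 * per_unit / total);  /* ~11%, about 1/9 */
    return 0;
}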
 

CorrisD

badchoiceboobies
So are there any rumours regarding the usage of their RAM pools?
I remember reading somewhere that the 720, while having a lot of RAM, will have - what was it - half or less split off for OS-intensive tasks like being a DVR, recording gameplay, etc.

If Sony only goes for 2-4GB, they will surely be taking up a good amount for their OS; the XMB used 120MB+ to start with, and that isn't doing a whole lot behind the scenes as it is.
It worries me that Sony might go for 4GB and end up leaving something like 3GB for gaming, compared to 4-5GB+ on the next Xbox.
 
At certain, trivial workloads.

Let's see, one of its biggest critics is Carmack; who to trust, you or him...

The IPC of the Xenon/Cell PPE is supposed to be 0.2, vs close to 1 for current X86...

I actually don't think 20x is an exaggeration. In-order to out-of-order alone is an insane difference.

Carmack did not like the PS2, bitched about Cell in 2007, thinks that the iPad 2 has half the power of the 360/PS3, and has released how many games for the PS3 from his first devkit until now?

Furthermore, he liked the 360 best even though, according to him, the PS3's Cell has more total processing power and Rage sometimes even performs better on the PS3 than on the 360. His main concern has been the tools Sony provides - which is really a shame, but I guess competing with MS when it comes to software is next to impossible. No, I don't trust Carmack anymore; he lost to Yerli, Rein and co. a long time ago when it comes to consoles.

It seems to me he never quite made the shift from PC to consoles - it's a shame, because he could have done great things with the 360/Wii/PS3; now he is irrelevant, along with id Tech 5. Sorry for the long rant. I do believe that at first (2006/07) Cell was harder to utilize, but after some time a good developer will benefit more from it. Cell was a paradigm shift, and of course this takes time and effort, which will pay off in the end. The Last of Us and Beyond show the true power of Cell.
 
Edit:

Yes, I read how it said 'emulate', but I'm not so easily convinced that emulating the Cell is as simple as he makes it sound.

I'm not misunderstanding anything, and I even specifically said it's not going to be a halfway step to the PS4. WTF is it with people on this site and their horrible reading comprehension?

There won't be a PS3.5 with more memory or a different GPU or anything like that. Go on fooling yourself, and apparently others, into believing this.

Also, I don't give a crap about MS's PowerPoint, which is two years old and in no way concrete, but you want to take it as fact anyway.
http://controversy.typepad.com/vide...nfinity-shown-behind-closed-doors-at-ces.html

"According to reliable sources, the Xbox Infinity (which has a slight chance of being called Xbox 365 when it launches) has in fact been revealed to major developers at the 2012 CES! The reason these developers have not mentioned anything to the public or the media is because the developers have signed a contract preventing the console from being discussed publicly. And from what I'm told, the Xbox Infinity is still on schedule to be released in time for Christmas of 2012 in North America. I'm looking forward to seeing the Xbox Infinity in action at the 2012 Electronic Entertainment Expo!"

Xbox 365 is the XTV Xbox 361 and is scheduled for the Christmas season of 2012 in North America.

365 indicates the same thing that PS3.5 indicates: newer hardware. HDMI pass-through, low-power modes and newer hardware are going to require emulating the older hardware, not the 3 PPC processors.

My speculation is based on economics: would it be cheaper for both Microsoft and Sony to use exactly the same SoC? How much would it cost to include 6-plus SPUs vs. the savings of one SoC for both? Both will have to emulate the Xbox 360 and PS3 respectively IN ANY CASE. Considering how tiny SPUs are at 32nm, AMD producing everything but the 1 PPC + 4 SPU wafers and IBM assembling the AMD parts with the IBM-manufactured PPCs & SPUs = Oban, and economical.

1) The 2010 Xbox 720 PowerPoint is old, and some newer indication that Xbox 361 is still valid was needed. There are three cites for Xbox 361 = Xbox 365 = Xbox Infinity = Xbox Xfinity = Project 10, all released in the Christmas season of 2012.

2) It is a new SoC with upgraded hardware using PPC processors, so it will need to emulate an Xbox 360, not emulate PPC processors. There are two cites and common sense for this.

3) IF Sony is doing the same - and the DigiTimes rumor would indicate this - then Sony will have to do the same, most likely with exactly the same AMD hardware. That the DigiTimes rumor has a PS4 (it has to be a PS3.5 if released in 2012) with a Kinect interface also supports the same hardware in both, as do microsoft-sony.com and sony-microsoft.com. Both the DigiTimes rumor and the Microsoft domain registration occurred in the same month: July 2011. This is the least supported point, with only one cite from a source that is considered unreliable.

XTV => an 800-million-unit market of new internet-connected CE devices between now and 2016.
The PS3 will have HTML5 <video>, PlayReady DRM and WebMAF by the end of October.
Both the Xbox and PS3 will have full browser support by the end of the year for the first time ever. WHY? How are they going to make money? What does this imply?

This is mostly hindsight, given the Xbox 720 PowerPoint and a review of old rumors that now make sense.
 

Clear

CliffyB's Cock Holster
missile said:
What irritates me quite a bit is how you talk about uniform memory systems
vs. non-uniform systems in some of your recent posts, while somehow stating
that the former is better than the latter. So I will take this on for the
rest of the post.

Unified is more cost-effective and space-efficient, i.e. you can do more with less. That's why it's "better".

Nobody is doing this for the programmer's comfort! This is all about economics.

PS. I'm not, and never have been, a member of the CELL=shit brigade, and I actually fully expected a next-gen Cell part to be in the PS4, mainly for compatibility reasons. As far as I can tell, the only reason both MS and Sony are going the way they are is to keep costs down; they both want to push systems based on $50 mass-produced SoCs.
 
That's what I'm expecting from how everything sounds so far.

I think they have to use eDRAM.
Without it, I don't think DDR3 has the bandwidth for 1080p deferred rendering, which is almost the only rendering pipeline in use right now, at least by the AAA studios.

So yeah, Microsoft has to provide enough bandwidth for 1080p deferred rendering engines.
I think 4GB of GDDR5 and a fast HDD cache would be my preferred setup, but I'm new to the size-vs-speed discussion on RAM. Then again, CPU caches are small and CPUs spend a lot of time stalled, I'm told (not sure how it is with the newer ones' increased cache sizes and such).
You just want to feed the CPU and GPU data to work with, fast.
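To put a rough number on the "bandwidth for 1080p deferred rendering" point, here's a crude sketch of G-buffer traffic per frame. The G-buffer layout and the light-pass count are illustrative assumptions, not numbers from any real engine:

/* Crude estimate of G-buffer traffic for 1080p deferred shading.
   The layout (4 RTs * 32bpp + 32-bit depth) and the light-pass count are
   illustrative assumptions, not real engine data. */
#include <stdio.h>

int main(void)
{
    const double GiB = 1024.0 * 1024.0 * 1024.0;
    double pixels  = 1920.0 * 1080.0;
    double gbuffer = pixels * (4 * 4 + 4);   /* 4 color RTs + depth, in bytes     */
    int    lights  = 32;                     /* assumed screen-sized light passes */

    /* One full write of the G-buffer, then one full read per light pass.
       Textures, shadow maps, overdraw, AA and post-processing are all ignored. */
    double per_frame = gbuffer * (1 + lights);

    printf("G-buffer size:            %.1f MB\n", gbuffer / (1024 * 1024));
    printf("G-buffer traffic @ 60fps: %.1f GB/s\n", per_frame * 60 / GiB);
    return 0;
}

Even with those toy assumptions the number lands in the same ballpark as a 256-bit DDR3 bus, which is why eDRAM for the render targets keeps coming up.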
 

Cyborg

Member
Can someone answer one simple question:

- If the rumors about the specs of the PS4 and Xbox 720 are true, which console will be more powerful and produce better graphics?

Thank you
 

Durante

Member
Can someone answer one simple question:

- If the rumors about the specs of the PS4 and Xbox 720 are true, which console will be more powerful and produce better graphics?

Thank you
That's not a simple question at all. However, if the current rumors are exactly true (which is exceedingly unlikely considering how far these systems are from being released), and you're looking only at graphics, PS4 should generally have the upper hand. This is by virtue of its more powerful GPU and high-bandwidth memory interface.
 

onQ123

Member
Can someone answer one simple question:

- If the rumors about the specs of the PS4 and Xbox 720 are true, which console will be more powerful and produce better graphics?

Thank you

That depends on which rumors you're going by, because there's also the 20GB super-Cell PS4 that can bring us Toy Story 4 straight out of Pixar's studio.
 

saladine1

Junior Member
The real question is, are the ram pools dictated by edram if a unified architecture is universally implemented given the fact that ddr7 and ddr4 are compatible in terms of 38 nm wafers with resistors that can reduce latency and throughput by means of bandwidth allocation and 256 bit buses?

That's the real question..
 

onQ123

Member
The real question is, are the ram pools dictated by edram if a unified architecture is universally implemented given the fact that ddr7 and ddr4 are compatible in terms of 38 nm wafers with resistors that can reduce latency and throughput by means of bandwidth allocation and 256 bit buses?

That's the real question..


No, the real question is: where are you going to get DDR7 from?
 