
VGLeaks Durango specs: x64 8-core CPU @1.6GHz, 8GB DDR3 + 32MB ESRAM, 50GB 6x BD...

pestul

Member
so far i think i'll probably get an ouya before ps4 or the xbawx experience.
Yeah, I think I'll be getting an Ouya just to replace my half-shitty ASUS O!Play media streamer. This thing is just unstable half of the time. At that price point, plus the ability to play games...
 

McHuj

Member
No one talks about the CPU. Really, is this low-entry, low-end laptop-level CPU going to be enough? All the tech demos we saw last year were on ultra-high-end Core i7s.
Especially with only two cores of it handling stuff like this:


Each of those CPU cores is probably 3-4x more powerful than the Xenon cores/PPU at running general-purpose code. The compute power isn't bad either for these CPUs, but to get more raw number-crunching performance, the APU design should allow for very low-latency access to the GPU compute units, so performance shouldn't be a concern when it comes to replacing the compute power of the VMX units/SPEs.
 
Are you writing this on a cell phone?

Hah saw that too. Actually googled to see if it's some odd UK phrase or something. This post is the top result.

Guess he meant "what's the harm".

I turn autocorrect off first thing when I get a new cell phone. Shit is atrocious.

Then I see people constantly complain about autocorrect, and I'm like turn it off!
 
No one talks about the CPU. Really, is this low-entry, low-end laptop-level CPU going to be enough? All the tech demos we saw last year were on ultra-high-end Core i7s.
Especially with only two cores of it handling stuff like this:




Well, with as much developer input as these machines have gotten, I'm sure the general conclusion is that CPU strength is sort of inconsequential compared to the GPU.
 

Durante

Member
Do we have a rough idea of where they stand vs CELL and Xenon? Gflops wise.
GFLOPs wise it should be similar to Xenon, around half a PS3 Cell and 7 times a Wii U.

However, the idea for these new consoles is to move as much hard computational stuff as possible off the CPU and onto the GPU (or dedicated circuitry). This is not always possible, but taking this idea into account -- as well as the fact that these cores will be infinitely easier to use efficiently than those in PS360 -- they should be sufficient to not bottleneck the rest of the systems overly much.
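
For anyone who wants to sanity-check those ratios, here's a rough back-of-the-envelope calculation. The FLOPs-per-cycle figures are my own assumptions for peak single precision, not confirmed specs:

```python
# Rough peak single-precision GFLOPS: cores x clock (GHz) x FLOPs per cycle.
# All per-cycle figures below are assumptions for illustration, not official.

def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

jaguar   = peak_gflops(8, 1.6, 8)                           # assumed 8 SP FLOPs/cycle per core
xenon    = peak_gflops(3, 3.2, 8)                           # 360 PPC cores with VMX128
cell     = peak_gflops(1, 3.2, 8) + peak_gflops(6, 3.2, 8)  # PPU + 6 usable SPEs
espresso = peak_gflops(3, 1.24, 4)                          # Wii U, paired singles assumed

print(f"Jaguar x8: {jaguar:.0f}  Xenon: {xenon:.0f}  Cell: {cell:.0f}  Wii U: {espresso:.0f}")
print(f"Jaguar vs Cell: {jaguar / cell:.2f}x   Jaguar vs Wii U: {jaguar / espresso:.1f}x")
# ~102 GFLOPS for the Jaguar cluster: roughly Xenon-class peak, ~0.6x Cell, ~7x Wii U.
```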
 

szaromir

Banned
No one talks about the CPU. Really, is this low-entry, low-end laptop-level CPU going to be enough? All the tech demos we saw last year were on ultra-high-end Core i7s.
My early 2010 laptop with a Core i5 430M @ 2.27GHz has no problems running console ports, usually with Firefox, iTunes, Steam, Raptr and other programs in the background. Are the CPUs in Orbis/Durango more powerful than the CPU in my laptop? I can imagine that easily being the case (but would like to see someone comment on that), so the real-world performance increase over PS360 is going to be big.
 
Well, with as much developer input as these machines have gotten, I'm sure the general conclusion is that CPU strength is sort of inconsequential compared to the GPU.

Do things only mean what you want when you want? You just bombed dozens of arguments you've made in the past with that one.

If developer input means that, then how doesn't it mean an array of multiple other things?

I just thought it was interesting.
 

gofreak

GAF's Bob Woodward
Do we have a rough idea of where they stand vs CELL and Xenon? Gflops wise.

They should be about 100 GFLOPS.

I think Sony's idea is that it'll handle the types of thing you would have done on a PPU, and do so better and handle more of it than Cell (or Xenon) could have, while heavy FP/computation can be done on other things: the dedicated hardware for audio/video/decompression, the GPU, and perhaps that 'compute module', whatever it is.

It'd be interesting to see how single-thread performance on a Jaguar compares to a Cell/Xenon PPU. I would guess probably the same or better with a lot of code, but it is at a big clock-speed disadvantage. Parallelisation will be necessary, and I can see some devs complaining about single-thread performance on these systems (e.g. John Carmack).

edit - durante's kind of covered all the bases :p
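
To put a rough number on the parallelisation point, here's a quick Amdahl's law sketch; the parallel fractions are purely illustrative assumptions, not measurements of any real engine:

```python
# Amdahl's law: ideal speedup over one core when only part of the work scales.
# The parallel fractions below are illustrative assumptions.

def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.5, 0.8, 0.95):
    print(f"{p:.0%} parallel work -> {amdahl_speedup(p, 8):.2f}x on 8 Jaguar cores")
# 50% -> 1.78x, 80% -> 3.33x, 95% -> 5.93x: the serial tail eats the core-count
# advantage, which is why low per-core clocks hurt single-thread-heavy code.
```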
 
GFLOPs wise it should be similar to Xenon, around half a PS3 Cell and 7 times a Wii U.

However, the idea for these new consoles is to move as much hard computational stuff as possible off the CPU and onto the GPU (or dedicated circuitry). This is not always possible, but taking this idea into account -- as well as the fact that these cores will be infinitely easier to use efficiently than those in PS360 -- they should be sufficient to not bottleneck the rest of the systems overly much.
Interesting. So on the PC side of things, current CPUs should continue to be more than enough but there may be a significant bump in minimum GPU requirements?
 
Yes, absolutely. A modern quad-core i5 or i7 kicks all sorts of Jaguar ass.

That's good to hear. The thing that makes me the most nervous about upgrading my PC anytime soon is CPU/motherboard and what the next generation will require. Replacing graphics cards once or twice over the gen is a given but I don't want to rebuild an entire machine 2 years later.
 

tipoo

Banned
Isn't that what the audio and video processors are for? Freeing up a lot of CPU resources.

What do you mean, video processor? Graphics card? Those i7s would have had those too...And audio may have hit the PS3/360 processors pretty hard but it takes a few percent of a single modern processor core.

That's not to say I don't think this chip is adequate; most games are very much GPU bound, and if they take the time to properly multithread their games then 8 Jaguar cores will be more than enough to run them for the next few years.

Just consider what the three-core processor in the 360 can do, and that came out during the Pentium 4 era; it's a pretty inefficient architecture, yet it runs all of today's games. With consoles there is a lot more optimization.

Yes, absolutely. A modern quad-core i5 or i7 kicks all sorts of Jaguar ass.

Even an old Core 2 Quad only limits performance in a handful of games; most games are very GPU bound.
 
Do things only mean what you want when you want? You just bombed dozens of arguments you've made in the past with that one.

If developer input means that, then how doesn't it mean an array of multiple other things?

I just thought it was interesting.

WTF are you talking about? No, really?
 

nib95

Banned
How long until new cards come out and CPUs have to be upgraded to not hold back the GPU?

CPUs will be fine even years from now. It's the best GPUs that will have to be upgraded in order to keep up with the best of what consoles offer several years from now.

I mean, compare what a 7800 GT can do versus the PS3, or an X1800 XT compared to the 360. Those GPUs are dinosaurs comparatively.
 

Krabardaf

Member
To me the interesting part is bothering to do the math.

IMO his scenario is a bit of a best-case one: efficiently streaming nearly everything. It's what you would aim for if you are making a large continuous game, but a small indie game might load everything into memory, or load things a level at a time. Also, in terms of bandwidth, I didn't see anything about the bandwidth consumed by writing back, for example rendering on the GPU and then having to resolve that texture to main memory; that would mean lower available read bandwidth and a lower required memory pool.

Basically the idea is that there is stuff you need right now, stuff you need soon, and stuff you won't need for a while. The stuff you need right now needs to fit in the available bandwidth, the stuff you need soon needs to fit into RAM, and the stuff you need eventually can sit wherever.

If you increase the amount of RAM and decrease its speed, you are shrinking the amount of data that is available now and increasing the amount of data that is available soon. But if you need less bandwidth because you aren't eating through that much data, you should also have less data you need to cache, since drawing a frame doesn't take that much bandwidth. So, in general, as you increase RAM amount you should increase bandwidth, and vice versa.

Which means either Orbis has too little room for cache and so can't make use of its bandwidth because it has to go back to disk too much (Orbis also has a faster BD drive IIRC, which could be why), or Durango has too little bandwidth to use its RAM effectively. (Though to be fair, the ESRAM could change this in some ways.)

Now, to be fair, if you have a lot of RAM relative to bandwidth, that would accommodate rapidly changing scenes better: scenes that have very fast movement, instant travel, scenes where the geometry can transform into a Dark World variant, etc. Having a lot of RAM also means you can get by with less efficient streaming and caching and spend more time on other stuff.

Edit: Never mind this edit, it didn't make a ton of sense!

I was literally writing the same thing when my browser crashed, and then I saw your post. :p

3.5GB may seem like a lot for now, but remember it's unified, and games are bound to increase their RAM consumption over time. On PC, 4GB of RAM is almost always enough for games, but that's because you also have a large pool of VRAM.
I can easily see 3.5GB as a limiting factor a few years after release, but it is, of course, also a matter of how optimized the games will be.

HDD speed is far more than a hundred times slower than GDDR5, or even DDR3. This could easily be an issue in, say, an open-world game.

TL;DR: bandwidth doesn't make the total amount of RAM irrelevant.
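
Quick sanity check on the "more than a hundred times slower" point, using the rumoured bandwidth figures; all the numbers here are ballpark values I've plugged in myself:

```python
# Ballpark comparison of rumoured memory bandwidth vs a typical 2.5" HDD.
# Figures are rumoured/typical values, purely for scale.

gddr5_gbs = 176.0   # rumoured Orbis GDDR5 bandwidth, GB/s
ddr3_gbs  = 68.0    # rumoured Durango DDR3 bandwidth, GB/s
hdd_gbs   = 0.1     # ~100 MB/s sequential; random small reads are far worse

print(f"GDDR5 vs HDD: ~{gddr5_gbs / hdd_gbs:.0f}x faster")              # ~1760x
print(f"DDR3  vs HDD: ~{ddr3_gbs / hdd_gbs:.0f}x faster")               # ~680x
print(f"Refilling 3.5 GB from HDD: ~{3.5 / hdd_gbs:.0f} s, best case")  # ~35 s
```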
 
WTF are you talking about? No, really?

It's simple really: if your reasoning is that developer input played a major part in the design of these systems, and hence the CPU being Jaguar is nothing bad, or at least not so bad, then you should also apply that reasoning to other aspects.

Like RAM, or any other aspect of the system. Which you don't. So that's what I was talking about. It bugged me a little!

Maybe the fact that the CPU is a Jaguar wasn't so much due to developer input as to other external reasons.

CPUs will be fine even years from now. It's the best GPUs that will have to be upgraded in order to keep up with the best of what consoles offer several years from now.

I mean, compare what a 7800 GT can do versus the PS3, or an X1800 XT compared to the 360. Those GPUs are dinosaurs comparatively.

I was talking about how CPUs sometimes need to be upgraded to allow the GPU to reach its potential. I say this because, for example, if I stick a GTX 680 in with my Core 2 Quad, it won't cut it. It will be "CPU starved" in a sense, even though my CPU is more powerful than these Jaguar CPUs.
 

nib95

Banned
I was literally writing the same thing when my browser crashed, and then I saw your post. :p

3.5GB may seem like a lot for now, but remember it's unified, and games are bound to increase their RAM consumption over time. On PC, 4GB of RAM is almost always enough for games, but that's because you also have a large pool of VRAM.
I can easily see 3.5GB as a limiting factor a few years after release, but it is, of course, also a matter of how optimized the games will be.

HDD speed is far more than a hundred times slower than GDDR5, or even DDR3. This could easily be an issue in, say, an open-world game.

TL;DR: bandwidth doesn't make the total amount of RAM irrelevant.

From what I have read, the greater bandwidth means you can access more of that RAM in any given frame. So at 60fps, Orbis on its rumoured spec can access about 3GB of RAM per frame whilst Durango can access about 1GB. So keep that in mind as well.
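
For reference, those per-frame figures are just the rumoured peak bandwidth divided by the frame rate; real, sustained numbers would be lower after overheads:

```python
# GB of memory traffic available per frame = peak bandwidth / frame rate.
# Bandwidth values are the rumoured specs; sustained figures would be lower.

def gb_per_frame(bandwidth_gbs, fps):
    return bandwidth_gbs / fps

for name, bw in [("Orbis GDDR5", 176.0), ("Durango DDR3", 68.0), ("Durango ESRAM", 102.0)]:
    print(f"{name:14s}: {gb_per_frame(bw, 60):.2f} GB/frame @60fps, "
          f"{gb_per_frame(bw, 30):.2f} GB/frame @30fps")
# Orbis ~2.9 GB/frame vs Durango DDR3 ~1.1 GB/frame at 60fps; the ESRAM adds
# bandwidth but only over a 32 MB window, so it can't hold bulk assets.
```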
 
For a variety of reasons, I have not given up on 3D stacking yet.

Why?
1. From what I've read, the current leak could be up to 9-10 months old. That's pretty old. A lot can change in that much time.
2. Microsoft is already invested in the stacking consortium.
3. Stacked memory uses 10% of the energy of equivalent DDR3. This would make it ideal for console use.
4. Depending on the memory design, it can achieve between 128 and 320 GB/s of bandwidth.
5. Stacked memory is roughly 90% smaller than traditional DDR3 DIMMs. This would be ideal for consoles.
6. Such memory goes into production in the second half of THIS year, which would be about the time the consoles go into production (sometime in the summer/early fall).

Overall (bandwidth, physical size, energy consumption, etc.), I would think MS would go with this in the final revision. I think the current "leak" is outdated.

While stacked memory would be expensive up front, it would also save a lot of money in the long run, make the console more "future proof", and have some immediate advantages (e.g. the power supply could be smaller, less silicon would be used in production, etc.).

Again, this is based on the hearsay that the current "leak" is about 9-10 months old already.
 

ekim

Member
For a variety of reasons, I have not given up on 3D stacking yet.

Why?
1. From what I've read, the current leak could be up to 9-10 months old. That's pretty old. A lot can change in that much time.
2. Microsoft is already invested in the stacking consortium.
3. Stacked memory uses 10% of the energy of equivalent DDR3. This would make it ideal for console use.
4. Depending on the memory design, it can achieve between 128 and 320 GB/s of bandwidth.
5. Stacked memory is roughly 90% smaller than traditional DDR3 DIMMs. This would be ideal for consoles.
6. Such memory goes into production in the second half of THIS year, which would be about the time the consoles go into production (sometime in the summer/early fall).

Overall (bandwidth, physical size, energy consumption, etc.), I would think MS would go with this in the final revision. I think the current "leak" is outdated.

While stacked memory would be expensive up front, it would also save a lot of money in the long run, make the console more "future proof", and have some immediate advantages (e.g. the power supply could be smaller, less silicon would be used in production, etc.).

Jeff, is it you? :p
 

Hana-Bi

Member
For a variety of reasons, I have not given up on 3D stacking yet.

Why?
1. From what I've read, the current leak could be up to 9-10 months old. That's pretty old. A lot can change in that much time.
2. Microsoft is already invested in the stacking consortium.
3. Stacked memory uses 10% of the energy of equivalent DDR3. This would make it ideal for console use.
4. Depending on the memory design, it can achieve between 128 and 320 GB/s of bandwidth.
5. Stacked memory is roughly 90% smaller than traditional DDR3 DIMMs. This would be ideal for consoles.
6. Such memory goes into production in the second half of THIS year, which would be about the time the consoles go into production (sometime in the summer/early fall).

Overall (bandwidth, physical size, energy consumption, etc.), I would think MS would go with this in the final revision. I think the current "leak" is outdated.

While stacked memory would be expensive up front, it would also save a lot of money in the long run, make the console more "future proof", and have some immediate advantages (e.g. the power supply could be smaller, less silicon would be used in production, etc.).

Again, this is based on the hearsay that the current "leak" is about 9-10 months old already.

Man, this sounds too good to be true.

I think in this aspect MS will choose the conservative route.
 

Krabardaf

Member
From what I have read, the greater bandwidth means you can access more of that RAM in any given frame. So at 60fps, Orbis on its rumoured spec can access about 3GB of RAM per frame whilst Durango can access about 1GB. So keep that in mind as well.

Of course, that was specifically what I (well, we) was responding to.
Raw bandwidth is apparently greatly in favour of the PS4, but if the 3.5GB gets filled, well, you have to fetch data from the HDD. And the HDD will probably be somewhere between 70-100 MB/s in the absolute best case, since small file accesses can yield results far worse than that.
If that happens, you lose part of the benefit of the high bandwidth.

With at least 1.5GB of extra RAM on Durango, this could benefit some RAM-intensive games.
I'm not neglecting the fact that higher bandwidth could also benefit some games on Orbis, just reminding people that performance depends on many things, and that you have to take everything into account, not just specific numbers.

@Starbuck2907, interesting, but I doubt it. Budgets are tight, and more importantly, if they were going to use this, why would the devkits use far slower RAM than the final product? They could have gone with GDDR5 in the devkits if they wanted to simulate stacked RAM.
Plus, that would be investing in an unestablished technology for a mass consumer device.
 

DopeyFish

Not bitter, just unsweetened
I've been hopeful for an HMC since the beginning

Question is... what will the initial cost be? What access times can you achieve in a stack?

Having an HMC would be ballsy, and if it rests on a DSP or whatever, it makes the console ripe for a pretty incredible SoC revision down the line.
 
For a variety of reasons, I have not given up on 3D stacking yet.

Why?
1. From what I've read, the current leak could be up to 9-10 months old. That's pretty old. A lot can change in that much time.
2. Microsoft is already invested in the stacking consortium.
3. Stacked memory uses 10% of the energy of equivalent DDR3. This would make it ideal for console use.
4. Depending on the memory design, it can achieve between 128 and 320 GB/s of bandwidth.
5. Stacked memory is roughly 90% smaller than traditional DDR3 DIMMs. This would be ideal for consoles.
6. Such memory goes into production in the second half of THIS year, which would be about the time the consoles go into production (sometime in the summer/early fall).

Overall (bandwidth, physical size, energy consumption, etc.), I would think MS would go with this in the final revision. I think the current "leak" is outdated.

While stacked memory would be expensive up front, it would also save a lot of money in the long run, make the console more "future proof", and have some immediate advantages (e.g. the power supply could be smaller, less silicon would be used in production, etc.).

Again, this is based on the hearsay that the current "leak" is about 9-10 months old already.

From last week

Micron Readies Memory Cube for Debut
 
I'm really dumb/unknowledgeable when it comes to GPUs and memory bandwidth. So I'd just like to ask a question for clarification.

- 8 gigabytes (GB) of DDR3 RAM (68 GB/s)
- 32 MB of fast embedded SRAM (ESRAM) (102 GB/s)

So people are saying that the bandwidth of Durango's ESRAM is too slow to manage resolutions as high as Orbis can, based on rumours of Orbis' hardware.

So my question is, would it be possible to render 60% of the screen using ESRAM at 102GB/s, and 40% of the screen using the DDR3 at 68GB/s to emulate the total bandwidth of 170GB/s? Or is that total nonsense? (It probably is, but just wanted to make sure).
 

nib95

Banned
I'm really dumb/unknowledgeable when it comes to GPUs and memory bandwidth. So I'd just like to ask a question for clarification.

So people are saying that the bandwidth of Durango's ESRAM is too slow to manage resolutions as high as Orbis can, based on rumours of Orbis' hardware.

So my question is, would it be possible to render 60% of the screen using ESRAM at 102GB/s, and 40% of the screen using the DDR3 at 68GB/s to emulate the total bandwidth of 170GB/s? Or is that total nonsense? (It probably is, but just wanted to make sure).

I'm not sure what you mean by render the screen, but the ESRAM is only 32MB. In reality it will probably be just enough for 2xMSAA at 1080p and not much else.
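
Rough maths behind "just enough for 2xMSAA at 1080p", assuming a plain 32-bit colour target plus a 32-bit depth buffer and no compression (a simplification; real target layouts differ):

```python
# Render-target size = width x height x bytes per pixel x MSAA samples.
# Assumes uncompressed 32-bit colour + 32-bit depth; actual layouts differ.

def target_mb(width, height, bytes_per_pixel, samples=1):
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

colour = target_mb(1920, 1080, 4, samples=2)   # ~15.8 MB
depth  = target_mb(1920, 1080, 4, samples=2)   # ~15.8 MB
print(f"1080p 2xMSAA colour + depth: {colour + depth:.1f} MB of the 32 MB ESRAM")
# ~31.6 MB: it just squeezes in, with nothing left over for extra targets.
```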
 

Krabardaf

Member
I'm really dumb/unknowledgeable when it comes to GPUs and memory bandwidth. So I'd just like to ask a question for clarification.



So people are saying that the bandwidth of Durango's ESRAM is too slow to manage resolutions as high as Orbis can, based on rumours of Orbis' hardware.

So my question is, would it be possible to render 60% of the screen using ESRAM at 102GB/s, and 40% of the screen using the DDR3 at 68GB/s to emulate the total bandwidth of 170GB/s? Or is that total nonsense? (It probably is, but just wanted to make sure).

No, I don't think that is possible right now; however, tiling will enable buffers larger than 32MB, just like on the 360.

I won't get technical because I have no personal experience with this, but basically it's about rendering the frame in a varying number of tiles instead of in one pass. You render a tile in the eDRAM/ESRAM and then pass it to main memory. When all the tiles are ready, you combine them and send the result to the front buffer.
Faster RAM will naturally help the process.

Don't hesitate to correct me if I'm saying something inaccurate on this topic.
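
A very simplified sketch of that idea (360-style tiling, heavily simplified; draw_tile and resolve_tile are hypothetical callbacks, and bytes_per_pixel is meant to cover every render target bound for the pass):

```python
# Simplified tiled rendering: draw into the small fast pool one horizontal
# strip at a time, then resolve each strip out to main memory.
# draw_tile / resolve_tile are hypothetical callbacks for illustration.

ESRAM_BYTES = 32 * 1024 * 1024

def render_frame_tiled(width, height, bytes_per_pixel, draw_tile, resolve_tile):
    rows_per_tile = max(1, ESRAM_BYTES // (width * bytes_per_pixel))
    y = 0
    while y < height:
        rows = min(rows_per_tile, height - y)
        draw_tile(y, rows)      # re-submit/clip geometry for this strip (in ESRAM)
        resolve_tile(y, rows)   # copy the finished strip out to the DDR3 buffer
        y += rows

# e.g. at 1080p with 20 bytes/pixel of render targets this splits the frame into
# two strips (873 + 207 rows); the per-strip geometry re-submission is where the
# performance cost people mention comes from.
```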
 

Krabardaf

Member
There is no need for a chip; the 360 already does this as soon as the eDRAM is full, if I'm not mistaken. But it's not entirely automatic. Can't provide more detail myself, sorry :)
 

Feindflug

Member
There is no need for a chip; the 360 already does this as soon as the eDRAM is full, if I'm not mistaken. But it's not entirely automatic. Can't provide more detail myself, sorry :)

Yeah, but isn't tiling gonna cost performance? IIRC this is the reason most devs on 360 that went sub-HD didn't bother with tiling in the first place.

So yeah, a chip dedicated to that (if something like this is possible) would definitely help. On the other hand, why put a tiling-dedicated chip in the system when you can put in 64MB of ESRAM? Is it that expensive? Or is 64MB still not good enough? :p
 

scently

Member
Yeah, but isn't tiling gonna cost performance? IIRC this is the reason most devs on 360 that went sub-HD didn't bother with tiling in the first place.

So yeah, a chip dedicated to that (if something like this is possible) would definitely help. On the other hand, why put a tiling-dedicated chip in the system when you can put in 64MB of ESRAM? Is it that expensive? Or is 64MB still not good enough? :p

The difference here is that you can render to the ESRAM and the DDR3 RAM, unlike on the 360 where you render to the eDRAM only and then copy and resolve it into main memory. This should provide devs with much more flexibility than they had developing for the 360.
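
If that's right, one way to picture the extra flexibility is choosing, per render target, which pool it lives in. The pool size below is the rumoured figure; the greedy policy and the target list are invented purely for illustration:

```python
# Toy illustration: place the most bandwidth-hungry render targets in ESRAM
# and let the rest live in DDR3. The 32 MB figure is the rumoured ESRAM size;
# the target list and greedy policy are made up for illustration only.

ESRAM_MB = 32

def place_targets(targets):
    """targets: (name, size_mb, bandwidth_priority) tuples."""
    esram_left, placement = ESRAM_MB, {}
    for name, size_mb, _prio in sorted(targets, key=lambda t: -t[2]):
        if size_mb <= esram_left:
            placement[name] = "ESRAM"
            esram_left -= size_mb
        else:
            placement[name] = "DDR3"
    return placement

frame_targets = [("gbuffer_colour", 16, 10), ("depth", 16, 9),
                 ("shadow_map", 16, 6), ("post_fx", 8, 4)]
print(place_targets(frame_targets))
# Colour + depth fill the 32 MB of ESRAM; shadow map and post-FX spill to DDR3,
# which the 360 couldn't do, since everything had to render into eDRAM there.
```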
 
The difference here is that you can render to the ESRAM and the DDR3 RAM, unlike on the 360 where you render to the eDRAM only and then copy and resolve it into main memory. This should provide devs with much more flexibility than they had developing for the 360.

Then tiling is unnecessary?
 

scently

Member
Then tiling is unnecessary?

Yeah. Apparently, from comments by those with some insight into its function (ERP), it seems like it's a mixture of the eDRAM on the PS2/Xbox 360 and a way to feed the GPU to improve GPU utilization/efficiency, among other things. We won't know its full purpose until developers get used to it and give us info on how it operates.
 
Yeah. Apparently, from comments by those with some insight into its function (ERP), it seems like it's a mixture of the eDRAM on the PS2/Xbox 360 and a way to feed the GPU to improve GPU utilization/efficiency, among other things. We won't know its full purpose until developers get used to it and give us info on how it operates.

I'm sure all the hardware stuff (sauces) is there for easier development. I'm reading this PDF from Tim Sweeney (Epic) and he talks about memory latency, cache coherency, GPGPU, TFLOPS and development costs. It is interesting.
 
Yeah. Apparently, from comments by those with some insight into its function (ERP), it seems like it's a mixture of the eDRAM on the PS2/Xbox 360 and a way to feed the GPU to improve GPU utilization/efficiency, among other things. We won't know its full purpose until developers get used to it and give us info on how it operates.

The comment has been made that MS looked at how developers were using the 360 over its lifetime and built Durango using that data, and it appears they have moved away from tiling.

We really need to have a white paper leak like we had with the 360 before it launched.
 

Krilekk

Banned
Genuine leaks corroborated by multiple sources are not rumours. Even if specs change massively, Wham's the barn in comparing what we currently have on the table at this point?

With how games journalism works these days, I wouldn't take any rumour seriously. One person pretends to know something, the next one writes about it, a third copies it, another reads it and claims to know it, which completes the cycle.
 