Karak
Member
Another interesting statement made regarding Durango vs 360, in line with the idea of mega textures/meshes
reference here
Let's see that Lionhead demo again
There's one thing that makes no sense to me. According to some of you, Durango's architecture has practically no advantages over the straightforward solution that Orbis employs - in fact, it seems to have a number of disadvantages - apart from the larger memory pool. That's allegedly because of Microsoft's non-gaming ambitions that require almost 3 gigs of RAM, which would not leave enough for games if they went with 4 gigs of GDDR5. So here's the thing I don't understand: if that was really the case, wouldn't it then be simpler to just put 3 gigs of DDR3 there for the system to use, and 3 (or even 4) additional gigs of GDDR5 for the games? The combination of DDR3 and GDDR5 is already common in the PC world, and it would hardly be significantly (if any) more expensive than 8 GB of DDR3 + ESRAM + customized DMEs + more problematic development because of the bottlenecks and the more complex architecture. I mean, if we can see that, surely it wouldn't escape all those Microsoft and AMD engineers.
Either someone is right... Or there will be some shocked gamers come launch.
We could really use more info for a clearer picture.
There's a ton of DMA that goes on inside a normal computer system, and whilst I am not versed in the specifics of GPU-based DMA (the information is not released to the public), I can tell you that it is usually bits of silicon that sit on a shared bus to request/write data without tying up other resources. A good example of this: your Ethernet card probably DMAs all the data it gets into a buffer in memory instead of constantly talking to the CPU about it.
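To make the Ethernet example concrete, here's a toy sketch of the idea (not how real silicon works, just the pattern): a "device" thread stands in for the NIC, writes received data straight into a shared buffer, and raises a single completion notification instead of involving the CPU for every byte. All names here are made up for illustration.

```python
import threading
import queue

buffer = bytearray(64)        # shared memory region the device writes into
completions = queue.Queue()   # stands in for a completion interrupt

def device_dma(data: bytes, offset: int) -> None:
    """Pretend 'device': writes directly into shared memory, then signals once."""
    buffer[offset:offset + len(data)] = data  # direct write, no CPU copy loop
    completions.put((offset, len(data)))      # one "interrupt" when done

t = threading.Thread(target=device_dma, args=(b"incoming packet", 0))
t.start()
t.join()

offset, length = completions.get()            # CPU handles one notification
print(bytes(buffer[offset:offset + length]))  # b'incoming packet'
```

The point is that the CPU only touches the data once, after the transfer is complete, rather than shuttling every word across itself.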
I can promise you that he is legit to some extent. Alpha kits are rumored to be built by the devs themselves based on a construction manual including off-the-shelf parts. Beta kits seem to be the property of MS.
Alpha is MS property too.
This is so wrong; if this is what superdae is saying, he just confirmed himself a troll.
No. You are talking about split pools, two memory controllers and two buses. It really isn't that difficult: they couldn't settle for 4GB, but they still wanted a unified pool of RAM, so they went with a solution much like the 360's.
I don't believe that for a second... 3 GB for an OS reservation while the 360 uses 32 MB? I can see them using 512 MB to 1.5 GB.
What are they using it for? So I can play games, stream my gameplay and store it in the cloud while buffering porn from 20 different browser tabs, to stream later, picture-in-picture, in the game I'm playing, while storing the latest Californication episode?
Why would one ever need more than 64k?
Durango does seem to be going for efficiency, and that shouldn't be underestimated. If Durango gets 90% utilisation of its GPU, and Orbis only 80% (entirely made-up numbers), then that closes the effective FLOP gap. I.e. Durango may be aiming to achieve similar results to Orbis using less raw muscle.
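Just to put rough numbers on that utilisation argument: here's the back-of-the-envelope arithmetic, where the 1.23 / 1.84 TFLOPS peaks are the widely rumored figures (an assumption here, not confirmed) and the utilisation percentages are the made-up ones from the post.

```python
# Rumored peak throughput (assumed figures) and hypothetical utilisation.
durango_peak, orbis_peak = 1.23, 1.84   # peak TFLOPS (rumored)
durango_util, orbis_util = 0.90, 0.80   # made-up utilisation from the post

durango_eff = durango_peak * durango_util   # effective TFLOPS actually used
orbis_eff = orbis_peak * orbis_util

print(round(orbis_peak / durango_peak, 2))  # raw gap: 1.5
print(round(orbis_eff / durango_eff, 2))    # effective gap: 1.33
```

So even with those charitable made-up numbers the gap narrows rather than closes: roughly 1.5x on paper down to about 1.33x in effective throughput.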
I'm still on the fence regarding tiling. Weren't MS pushing this last time (tiled forward rendering), and it got left behind? What's different this time that devs will actually use it? More direct support in hardware, making it easier to implement?
From the CPU/GPU perspective, the pools would be unified (both the CPU and the GPU access the same memory); they would only be split from the application perspective, which is a good thing in this case. None of the other things would pose more of an obstacle than they do on PS3, Vita, GameCube or any other system that's ever used multiple memory types.
I think they wanted LZ decompression for DXT texture data. The quoted 200MB/s compressed stream is 30% faster than a single core on my 2600S-based workstation; they'd need two Jaguar cores to get that kind of performance.
The 200MB/s of compressed data would decompress to 300-400MB/s of DXT data, or 300-800 MTexels/s - about 5-13 MTexels per 60 Hz frame. Probably fast enough by any measure.
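The conversion from bytes to texels above follows from the block-compressed formats: DXT1 stores a texel in 0.5 bytes and DXT5 in 1 byte, so the same decompressed byte rate spans a range of texel rates. A quick sanity check on the arithmetic:

```python
# Bytes per texel for the two common DXT formats (8 vs 16 bytes per 4x4 block).
bytes_per_texel = {"DXT5": 1.0, "DXT1": 0.5}

low_mtexels = 300 / bytes_per_texel["DXT5"]    # 300 MTexels/s (DXT5, low end)
high_mtexels = 400 / bytes_per_texel["DXT1"]   # 800 MTexels/s (DXT1, high end)

# Per-frame texel budget at 60 Hz: roughly 5 to 13 MTexels.
print(low_mtexels / 60, high_mtexels / 60)
```

That per-frame budget is comfortably more than a 1080p screen's ~2 MPixels, which is presumably why the poster calls it fast enough by any measure.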
Is VGLeaks doing an article today? I seem to have the notion in my head that it would be the audio processor.
That block diagram looks amazingly complex.
I've seen this "plane" thing mentioned in previous discussions; is there already any info on that? From the last picture, it looks like a simple overlay/merging of images, but there are two planes for "title" and one for "system". Since both "title" images are merged, I suppose it can't be 3D, so maybe augmented reality? (The A*C1 + (1-A)*C0 formula also suggests transparency, though.)
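For what it's worth, that A*C1 + (1-A)*C0 formula is just standard per-channel alpha compositing: C1 is the top plane's colour, C0 the plane underneath, and A the top plane's alpha. A minimal sketch (values made up for illustration):

```python
def blend(c0: float, c1: float, a: float) -> float:
    """Composite one colour channel of the top plane (c1, alpha a) over c0."""
    return a * c1 + (1 - a) * c0

print(blend(0.2, 0.9, 1.0))             # opaque top plane wins: 0.9
print(blend(0.2, 0.9, 0.0))             # transparent top plane, bottom shows: 0.2
print(round(blend(0.2, 0.9, 0.5), 2))   # half-transparent mix: 0.55
```

So transparency definitely falls out of the formula; nothing about it requires anything more exotic than ordinary overlay blending.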
Wait... there is nothing new that I can see on their site. Sangreal, where did you get that image?
One new tidbit in that diagram is that output bandwidth from the GPU seems to be capped in a way read bandwidth isn't.
I.e. even if you want to write to both DDR3 and eSRAM, you can never exceed 102GB/s of output writes.
Oh, you found the secret back door! Sneaky.
That applies to the eSRAM. The DDR3 is still 68.
So we can only have two layers of parallax? That's a bit shit in a modern game. Or maybe one can be a Mode 7 background, with a scrolling playfield over the top? That'd be pretty neat.
Looks like they will have something soon.
Yes, each Jaguar module has 4 cores. We've known this for a while. (Well, "known" as much as we "know" any of these things)
Hmmm, 2 CPU modules!?
Well, to the 'GPU memory system'. But what I mean is, you cannot 'combine' the memory pools to exceed 102GB/s of output bandwidth. You can read more than that, but not write.
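Spelling that asymmetry out as numbers from the leaked diagram: reads from DDR3 and eSRAM can run over their buses in parallel, but combined GPU writes are capped at 102GB/s no matter how they're split across the two pools.

```python
# Figures from the leaked diagram: DDR3 bus, eSRAM bus, and the write cap (GB/s).
ddr3_bw, esram_bw, write_cap = 68, 102, 102

peak_read = ddr3_bw + esram_bw                    # 170 if both buses saturate
peak_write = min(ddr3_bw + esram_bw, write_cap)   # still only 102

print(peak_read, peak_write)  # 170 102
```

Which is exactly the point above: the 170GB/s aggregate only ever applies to reads.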
I don't really get the point of display planes, at least from that illustration.
If it's a software feature, then why limit it to 3 and why make such a huge fuss about it?
If it's a hardware feature, then why? How long does a modern GPU take to scale and blend an image, a few microseconds?
I think the display planes have to do with rendering or sending video to a different device, but not with all the information at once.
Or it could be a reference to their roadmap leak, which indicated that you can be playing a game while having something like a news feed on the same screen.
There was a patent on display planes by MS, and it seems to indicate that a game can render at a certain resolution while its UI renders at a different resolution.
Whatever they are doing with it, I am sure it's necessary, as it adds to the BOM.
Here's a nice Beyond3D discussion on someone's interpretation of display planes... not suggesting it's right, but interesting nonetheless.
http://www.neogaf.com/forum/showpost.php?p=47577946&postcount=819
Wrong link?
Gemüsepizza;47580706 said: This 170GB/s entry is really misleading. I don't know if this is intentional, but it's 102GB/s for the 32MB eSRAM and 68GB/s for the 8GB DDR3. Why didn't they use 2 arrows in this illustration? They should have known that you can't add those numbers.