PS4 = stretchy pants
XB1 = size 28 mens jeans
how big is the person?
I find no fault with the design of XONE.
Policies and DRM are the only hurdle. As of now, a mental hurdle. At the time of concept they rolled with what they thought was the wise solution.
Snapping and gaming sounds very next gen, to be honest with you. Nintendo has survived with weaker graphics.
If Sony had stuck with 4GB, would that have pulled off 1080p? Every industry really is a game of inches.
Ty for the info.
I will now show off with my friends.
With better development tools, will game data end up 1:1, with only resolution still an issue? Or, because the eSRAM is so small, will game data AND resolution always be an issue?
Scale of game will also factor in, I'm sure.
Now I know this isn't a thread about compute, but could someone lay out the advantages of GPU Compute for me?
If it's used, how exactly does that help devs? I know it frees up CPU processes, but what exactly are the kinds of things that can be done with it?
In the beginning the CPU was the brains of the computer. It was versatile and it needed to be because it had to do all the work. Then along the way we decided that we wanted computers to make pretty pictures too. Well that involved some specialized math and being that the CPU was so versatile, it took up those responsibilities too although not very well. Eventually the need for prettier pictures won out and we realized that we could help the CPU out by having a GPU do the work of creating the pretty pictures instead.
So why is the GPU so much better at making pictures? Well as it turns out, the computing necessary to make pictures is much more limited than what the "I can do anything" CPU does. So designers made chips that only did those few things, but do them very very well.
It also turns out that making a pretty picture is very different than running arbitrary code. The picture can be broken up into smaller and smaller sections, and each section can be worked on independently. So given that the GPU required limited computing capabilities, the designers could make that part smaller, but then duplicate it many many times. So the GPU has limited computing complexity, but what it can do it can do very quickly. On top of that, it can do multiple versions of those computations at the same time.
Unfortunately, traditional code requires the complex capabilities found in the CPU, and it is very difficult to break it down into independent parts that can all be run at the same time. Well, that was until GPU Compute came along. Certain problems that the CPU is used to solving can be broken down so that they resemble the same kinds of tasks that the GPU does to create graphics. When you do that, you can feed those tasks to the GPU instead of the CPU, which has two effects:
- There is one less thing that the CPU has to do so it has time to do other stuff
- The GPU is really good at what it does. It will chew through the tasks you give it like a beaver through a No.2 pencil.
The three big things I've heard that you can offload to the GPU are...
- Physics
- AI, Pathfinding
- Computing directional sound effects
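To make the "break it into independent pieces" idea a bit more concrete, here's a tiny C++ sketch I put together (a toy of my own, not from any console SDK): the per-particle update only touches its own particle, which is exactly the kind of independent work a GPU compute job runs thousands of copies of at once, while a single CPU core has to walk the list one element at a time.

#include <vector>
#include <cstdio>

// Toy "physics" state: one particle's position and velocity.
struct Particle { float x, y, vx, vy; };

// The per-particle update. Each call touches only its own particle,
// so every particle can be processed independently -- the same shape
// of work a GPU compute job spreads across thousands of threads.
void step(Particle& p, float dt) {
    p.vy -= 9.8f * dt;   // gravity
    p.x  += p.vx * dt;
    p.y  += p.vy * dt;
}

int main() {
    std::vector<Particle> particles(100000, Particle{0.f, 100.f, 1.f, 0.f});

    // CPU style: one core walks the whole list, one particle at a time.
    // A GPU would instead launch one lightweight thread per particle
    // and chew through huge batches of them simultaneously.
    for (Particle& p : particles)
        step(p, 1.0f / 30.0f);

    std::printf("first particle y after one frame: %f\n", particles[0].y);
}

That independence is why physics, pathfinding and audio propagation keep coming up as compute candidates: they can all be sliced into lots of small jobs that don't need to wait on each other.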
"[In] year 1, most of the titles aren't going to use that rich feature set," Cerny told IGN. "They will be great titles, but they will be leveraging very straightforward aspects of the system such as graphics you can get from the very high bandwidth that we have.
"By year 3 or 4, I think we'll see a lot of GPGPU [General-purpose computing on graphics processing units], which is to say that the GPU will be used for a lot of things not directly tied to graphics. So physics, simulation, collision detecting or ray casting for audio or the like."
"Usually when I talk about this people say, 'but wait, won't that make the graphics worse?'" he continues. "Well, if you look at a frame and everything that's being done in that frame, a lot of phases within that frame – it's like 1/30 of a second – some of these phases don't really use all of the various modules within the GPU. Shadow map generation tends not to use ALU [arithmetic logic units] very much, so it's a really optimal time to be doing all of those other tasks.
"So most likely, 3 [or] 4 years in, once you've taken time to study the architecture you can improve the quality of your world’s simulation without decreasing the quality of your graphics."
Is the power gap between the X1 and the PS4 greater than the gap between the 360 and the PS3?
If so, by how much?
PS4= raging hard-on.
XB1= drunkenly poking at holes while half-mast.
Yes...by a lot...it's hard to put numbers on it because the architectures were so different last gen...
1080p(ounds)
Imagine the system RAM as a semi truck.
Large, can carry a lot of stuff but is slow.
Now imagine eSRAM as 2 door sports car. Fast but small.
Now... let's say a normal 1080p screen render involves 2 people and a full trunk.
A more advanced screen render needs 3 or maybe even 4 people and a full trunk.
It simply won't work. You can use the semi truck, but that'll be much slower. You can also move 2 people at a time, but you won't be able to achieve things like 60fps.
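For anyone who wants rough numbers behind the "full trunk": here's a quick back-of-the-envelope sketch. The render-target layout (four 32-bit G-buffer targets plus a 32-bit depth buffer) is just an illustrative guess on my part, real engines vary a lot, but it shows why 32 MB gets tight at 1080p.

#include <cstdio>

int main() {
    const double MiB = 1024.0 * 1024.0;

    // Assumed layout: four 32-bit render targets plus a 32-bit depth
    // buffer, i.e. 4 bytes per pixel per target. Purely illustrative.
    const int targets       = 5;
    const int bytesPerPixel = 4;

    auto frameBytes = [&](int w, int h) {
        return static_cast<double>(w) * h * bytesPerPixel * targets;
    };

    std::printf("1080p frame: %.1f MiB (vs 32 MB of eSRAM)\n",
                frameBytes(1920, 1080) / MiB);   // ~39.6 MiB -> doesn't fit
    std::printf(" 900p frame: %.1f MiB\n",
                frameBytes(1600, 900)  / MiB);   // ~27.5 MiB -> tight fit
    std::printf(" 720p frame: %.1f MiB\n",
                frameBytes(1280, 720)  / MiB);   // ~17.6 MiB -> fits easily
}

Which is roughly why 900p and 720p keep coming up: drop the resolution, or spill some of the targets out to DDR3 / tile the frame, and the working set fits in the sports car.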
haha thanks for the laffs gaf. so i deduce that xbone is slower. how much slower or faster is ps4?
Turns 1080p into 720p
Yes, this is pretty much "tiling". There are several architectural things built into the One (and PS4) GPUs to make this easier. It will undoubtedly be used by games, and perhaps some already are. But there's a problem doing it on One, which is the speed differential between eSRAM and DDR3. You can pull from eSRAM at 102 GB/s but from DDR3 at only 68 GB/s. Every time you have to go there, your performance suffers.

This is gonna sound real dumb, but would it be possible to do a partial frame buffer in the esram and the rest on DDR3 then bring them together?
As I understand it the PS4's 176 GB/s counts both directions. The One's eSRAM is 109 GB/s counting the same way, by bus width. But SRAM can be dual-ported, meaning mere bus width isn't accurate. Microsoft's claim of 204 GB/s for eSRAM should be true, not a PR lie. However, a SRAM cell is still limited by how transistors inside it are switched (they can't be registering data coming from both sides simultaneously), so compared to the simple bus counts, it will be harder to get anywhere near the 204 GB/s "maximum". Microsoft themselves have reported between 133 and 150 GB/s in the real world. This shouldn't have anything to do with heat, though.

isn't that 176 gbps already bi-directional? that 102 gbps isn't, and that's why microsoft uses 204 gbps. their mathematical mistake is adding both esram and ddr3 as 271 gbps or some other bullcrap.
again, due to overheating concerns, microsoft is apparently targeting 130-140 gbps on its esram.
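If anyone wants to see where the headline numbers come from, here's a rough sketch of the usual bandwidth math. The bus widths and clocks are the commonly reported figures, not anything confirmed in this thread: bandwidth is just transfers per second times bytes per transfer, and the 204 GB/s figure assumes the eSRAM reads and writes in the same cycle, which it can't do every cycle and which real code rarely sustains.

#include <cstdio>

int main() {
    // bandwidth (GB/s) = transfers per second (millions) * bus width (bytes) / 1000
    auto gbps = [](double mtps, int busBits) {
        return mtps * (busBits / 8.0) / 1000.0;
    };

    // Commonly reported figures (assumptions, not official spec sheets):
    double ddr3   = gbps(2133.0, 256);   // Xbox One DDR3:            ~68 GB/s
    double esram  = gbps(853.0, 1024);   // eSRAM, one direction:     ~109 GB/s
    double esram2 = esram * 2.0;         // naive read+write doubling: ~218 GB/s
    // Microsoft quotes 204 GB/s rather than the naive doubling, because the
    // simultaneous read+write case can't happen on every single cycle.
    double gddr5  = gbps(5500.0, 256);   // PS4 GDDR5:                ~176 GB/s

    std::printf("DDR3          : %6.1f GB/s\n", ddr3);
    std::printf("eSRAM (1-way) : %6.1f GB/s\n", esram);
    std::printf("eSRAM (2-way) : %6.1f GB/s naive peak\n", esram2);
    std::printf("GDDR5         : %6.1f GB/s\n", gddr5);
}

Against that 204 GB/s peak, the 133-150 GB/s real-world figure mentioned above works out to roughly 65-75% efficiency.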
You know, I'm reluctant to think that all of Microsoft's engineers are incompetent. Which begs the question, why go this route? Why only 32mb of ESRAM? Why not 1-2gigs? Surely it's possible? What's their rationale? Why would Microsoft go down this path when it's CLEARLY going to cause them issues? (From what people are saying).
Would 64mb of esram have fixed that issue or is that something that doesn't even make sense from an architectural or cost perspective?
There's the same amount of memory on each machine, so the size of worlds shouldn't suffer comparatively. If eSRAM isn't used efficiently, though, the low speed of the One's DDR3 might mean that textures load more slowly, or that distant models are lower-poly.

Since this is a thread where we shouldn't be ashamed of asking potentially obvious questions -- don't the memory differences essentially mean that large, open-world games in particular will suffer even more on the X1? Or is that not related?
I'm not a dev, so my "consensus" might not mean much. But yes, from the technical stuff I've read Microsoft is exaggerating but not straight-out lying with the 204 GB/s number. As to truly unified memory being more efficient, we did have one apparent confirmation from the Oddworld devs at the number you recall. But that was just one quote, and Microsoft predicted lower (about 155 GB/s, I think?). In that same statement they also said their own DDR3 wouldn't hit 68 GB/s, so I don't think they were just bashing the competition.

Ok. So I think we have a consensus that it is indeed 204 GB/s on paper (it's not a lie/PR spin) and it's 133 to 150 in reality.
I'm guessing the GDDR5 has much higher efficiency? I read somewhere that almost all of the 176 GB/s can be used in reality (remember reading 172 effective, but don't quote me on it).
In regard to Ryse: that is first party.
No matter the resolution, multiple games looking like that have to get some of you excited.
Functionally, GPU compute is feasible on both machines; neither is incapable or lacking any tools. The differences are twofold: first, the PS4 simply has more GPU power, so can run more compute jobs alongside the same amount of graphics jobs...and in some cases more compute jobs alongside more graphics jobs too.

If the XB1 does not have the right tools for GPU compute, will that have an effect on games on PC and PS4? Will this generation be limited because of the weak GPU in the XB1? ...What I'm trying to say is, can developers make a game that utilizes GPU compute and be able to make an XB1 version of the game that is not heavily reliant on it? Or will it have to be left out altogether on all platforms?
But not at 720p. I can accept it for now, but going 720p in 2014 is just sad and bad if you ask me. I'd rather have the devs downgrade the graphics.
But hey, every optimization devs do for eSRAM size will also greatly benefit the PS4 by lowering bandwidth needs, so the PS4 can shuffle around even more data sets.
I emphasized the size because that is kinda important; shuffling data in and out of eSRAM will probably not really net you a performance benefit on PS4, I think.
I hope 343 can do at least 900p for Halo 5 but 1080p@60 fps would be awesome for multiplayer.
Everything makes sense now.

ESRAM means "es RAM" in Spanish, which in English means "is RAM".
Infuckingcredible... I really cannot wait for devs to start utilising compute. The worlds will become so much richer... I'm gonna be going PS4 this gen, but I sorta worry about the Xbox when that starts taking place :/
Since he said layman's.
Imagine you have some people you need to transport somewhere.
DDR3 is slow but can carry a lot of people.
ESRAM is super fast but can't carry many people.
GDDR5 is simultaneously fast and can carry many people.
So while you can balance off the ESRAM and DDR3 to cooperatively ferry people to your destination, it's more complicated. Whereas the Sony solution (just GDDR5) is easier to deal with and doesn't require a lot of thought. You just load people up and transport them over.
The end result is potentially the same, but the way you went about it is much different between the two. Which is why people are saying ESRAM is a barrier to getting optimal performance on the XB1.
I can foresee some problems here. Most of the things that compute does are gameplay-changing. For multiplatform games, that could be too much of a hurdle to overcome. If used fully, it could make the games on consoles that can make greater use of compute totally different than on consoles that can use it less.
Imagine if Borderlands 3 decides to make full use of compute and their AI is much better on the PS4 than the XB1. That would affect everything from level design to the damage that enemies can do. Gearbox would have to balance the game twice instead of once. I'm afraid that while targeting the lowest common denominator doesn't appear to be happening for graphics, it will be used for features that impact gameplay.
I don't mean to say that eSRAM won't be hot! It's nearly a third of their entire APU transistor budget, so running it at full capacity all the time is certainly part of the reason for all the space and the fans and vents.

that is true, but totally disagree with "nothing to do with heat". esram is made up of transistors and capacitors, meaning heat ceiling is definitely considered, especially because it's on the same chip as the cpu and gpu. there's a reason why the xbone is that big and has that many vents. maybe that's the ceiling the esram can handle without losing performance or without the system crapping out. 150 gbps is at 75% efficiency at best, which is really low. yes, transistors can be a design problem because of their nature but then again 75% efficiency just because of the hardware design?
The question was why MS couldn't settle for 2GB or 4GB of GDDR5 the way Sony was planning on doing.
This has been a concern of many for months. We just don't know.
The best thing for the advancement of compute in consoles right now is for the PS4 to obliterate the xbone; then devs won't have to make a choice. But that isn't going to happen.
However they were "lucky" in that 8GB of GDDR5 became an option. (Luck is the wrong word in my opinion, there is no luck in business, only well managed risk.)
Pretty sure it's to do with Windows 8 legacy. Wouldn't Windows 8 apps have to be rewritten for a different memory structure? All PCs use DDR3 system memory and GDDR5 for GPU memory, so apps are written for the specific memory configuration of DDR3.
Looks OK. But this is once again where the eSRAM comes into play. Latency can cause stalls. In a worst case scenario (cache misses and stuff), assuming a 6-20 cycle latency at the given clock speed, the CPU can stall for roughly 8-24 cycles, the GPU for 3-10 waiting for data. More bandwidth doesn't help here in any way, shape or form. With eSRAM, the stalls are much shorter: about one cycle for the GPU and two for the CPU. Which obviously increases the efficiency and means the system performs closer to its theoretical peak.

This.
Here is a calculation I found on the internet to debunk this myth. I'm not backing it up since it's not my area of expertise, but it seems to use consistent math. Would like the gurus on this thread to comment on it:
http://systemwars.com/forums/index.php/topic/116470-done-my-research-reports-back-with-verdict-dat-gddr5-ps4/
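To put the stall numbers from the latency post above into one formula: the textbook model is average memory access time = hit time + miss rate x miss penalty. Here's a toy sketch plugging in the cycle counts quoted above (which are that poster's estimates, not measurements) with a miss rate I made up purely for illustration.

#include <cstdio>

int main() {
    // Classic average-memory-access-time model:
    //   AMAT = hit_time + miss_rate * miss_penalty
    auto amat = [](double hitCycles, double missRate, double missPenaltyCycles) {
        return hitCycles + missRate * missPenaltyCycles;
    };

    const double missRate = 0.05;  // 5% of accesses miss cache -- made-up number

    // Stall penalties roughly as quoted in the post above (estimates only):
    double gpuFromDDR3  = amat(1.0, missRate, 10.0);  // GPU waiting on DDR3
    double gpuFromESRAM = amat(1.0, missRate, 1.0);   // GPU waiting on eSRAM
    double cpuFromDDR3  = amat(1.0, missRate, 24.0);  // CPU waiting on DDR3
    double cpuFromESRAM = amat(1.0, missRate, 2.0);   // CPU waiting on eSRAM

    std::printf("GPU avg access: DDR3 %.2f cycles vs eSRAM %.2f cycles\n",
                gpuFromDDR3, gpuFromESRAM);
    std::printf("CPU avg access: DDR3 %.2f cycles vs eSRAM %.2f cycles\n",
                cpuFromDDR3, cpuFromESRAM);
}

The point being: the gap between the two comes from how often you miss and how long the miss costs, which is why extra bandwidth alone doesn't make the stalls go away.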
I don't think this at all. Once first-party games start to outstrip third-party games by a long way, they will just be damaging their own games, as they will start to get low scores from game sites. EA and Activision will not allow their brands to become muddied because of Microsoft if the competition has a lead in user base.
Doesn't really matter, the OS is a VM anyway, whatever Win 8 apps are compiled for regular PCs would've worked with GDDR5. AMD's hUMA roadmap is that all PCs in the future use GDDR5 and it shouldn't pose a problem for legacy apps.
I think I forgot that Microsoft had a revelation in production that their eSRAM was twice as fast as they designed it to be.