I kind of already knew what I was getting, but it was still funny when it happened.
It's fine. You don't have to believe it yet. We can talk in a few months when there is enough "acceptable" evidence to allow you to accept it.
5 GB GDDR5 versus 7 GB GDDR5 will not yield the difference that some of you are thinking.
If I were to tell you that developers are NOT concerned with the RAM situation and have bigger things to worry about (polygon count, asset creation, learning efficient new methods of coding, relearning x86) would that at all quell the hive mind? If I were to tell you that you are better off bitching about OTHER things (upclocking the GPU, complaining about the relatively low performance of the CPU) would that quell the hive mind?
If you believe the narrative that a console courting indie developers, giving out free devkits, talking to developers, and putting in 8 GB of GDDR5 (with 5 GB available to developers AT LAUNCH) is not a gaming machine, then you and I are not seeing eye to eye.
Parrots love Robert Downey Jr. and Iron Man, but Cyberz pls.
Do people finally believe the rumors, or do they think we are all just making shit up to make the Xbox look weaker? It is a media machine first and foremost, and a games player second.
So he hasn't actually done enough benchmarking to draw a conclusion but felt compelled to drop a sound bite that sounds like he drew a conclusion. Got it.
I'd take his speculation over many, many others.

I would, too. I'd just rather have his informed, empirically backed conclusion than his speculation.
Thuway is very funny sometimes. When that whole DF RAM article was the hot topic, he was copping pleas about the amount of RAM not mattering to devs, and developers saying they didn't care.
http://www.neogaf.com/forum/showpost.php?p=73171956&postcount=677
but when the Xbox One 3GB OS reserve first came out, he was singing a different tune:
http://www.neogaf.com/forum/showpost.php?p=58359297&postcount=27
So yeah, why buy a GeForce Titan when you can buy a GeForce 670, right...
To be fair though, you're still talking about a difference between GDDR5 and DDR3. 5 GB of GDDR5 is in fact a ton of RAM for gaming purposes; 5 GB of DDR3 is not the same. Whether or not it'll become an issue, I couldn't tell you.
I'm pretty sure he was talking about that in the grand scheme of things; it was right after they announced a console but focused on non-gaming features.
Ruined? Hyperbole much? Has he gotten things wrong? Yes, and so have other, even more reputable sources. At one point in time, they were getting information on a 200MHz increase. That CLEARLY didn't happen.
But these people have been right several times as well.
It was interesting to read in another thread that a PC dev was having a hard time splitting his main game loop to run across many threads/cores. It goes to show how relatively weak the Jaguar cores are compared to a modern Intel gaming CPU.
I wonder how many other PC devs are going to have a hard time breaking up their game code to work successfully across multiple weak CPU cores.
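To make the problem concrete, here's a minimal C++ sketch of the kind of restructuring involved; the subsystem names are hypothetical, not from the dev's post:

```cpp
// Minimal sketch (hypothetical names): splitting one frame's work across
// worker threads instead of running everything on a single core.
#include <functional>
#include <future>

struct World { /* game state */ };

void update_physics(World&) { /* ... */ }
void update_ai(World&)      { /* ... */ }
void update_audio(World&)   { /* ... */ }

void run_frame(World& world) {
    // Independent subsystems can be farmed out to separate cores.
    auto physics = std::async(std::launch::async, update_physics, std::ref(world));
    auto ai      = std::async(std::launch::async, update_ai,      std::ref(world));
    auto audio   = std::async(std::launch::async, update_audio,   std::ref(world));

    // Wait for every job before rendering. The hard part in a real engine
    // is untangling the data dependencies so these tasks truly are independent,
    // which is exactly why retrofitting a single-threaded loop is painful.
    physics.get();
    ai.get();
    audio.get();
}

int main() {
    World world;
    for (int frame = 0; frame < 3; ++frame)
        run_frame(world);
}
```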
Console devs have operated in such an environment for years now. But yeah, Sony had some serious balls calling it supercharged PC architecture.
What thread BTW?
I think that was mentioned in the Planetside 4/Oculus Rift thread.
All this arguing over specs is pointless. If these consoles launch and the games are identical on both, the cheaper console will win. Some are forgetting that if both consoles launched at the announced prices, the PS4 would always be the cheaper one this gen. You would be crazy to think MS is the only one who will drop their price in a year.
Games won't be identical, at least not long after launch, that's for sure.
That shouldn't stop people from enjoying the Xbone versions; not everyone needs to purchase the superior console version.
Cerny elaborated on what is meant by supercharged PC architecture. Do you mind addressing these architecture differences in detail?
Do you mean the one with Kinect support?
So when the 360 is a bit easier to develop for than the PS3 during its first cycle, John Carmack says "it's the greatest system ever created!!", but when the PS4 literally trounces the Xbox 1 in software development (to the point where developers were utilizing PS4 tools to develop games on X1, and there are even talks of 20-30-40fps vs 60fps), it's "they're the same", lol. I'm sure all those complaining about the X1 being a bitch to develop for are all delusional. They could be using similar architecture, but they sure don't have the same quality dev kits.
I like how this man has taken a shine to PSVita, probably lost a bet or something.
What did he say exactly? That the CPU and GPU share the same fast memory bus and can exchange data easily? That's the only improvement I can think of over a PC. But as far as the CPU goes, it's a total joke.
Well that's also laughable.
Supercomputer.
http://i.imgur.com/LDjOc7K.png
Imagine how much better that PC game would run if it could efficiently use all 8 threads in a modern Intel CPU. Or are you happy for it to simply brute-force it through on one or two threads?
Again, console games will improve PC games by requiring multithreading for optimal performance, and those will port real nice to PCs. You shouldn't be praising lazy devs and criticising 'weak' console CPUs for something that will benefit you directly.
Planetside 2 is a good example. The devs have said that they need to significantly rework the code for PS4, but that rework will feed back into improving the PC performance.
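As an illustration of that kind of rework (hypothetical names, nothing to do with SOE's actual codebase), here's a C++ sketch of turning one big serial entity loop into chunks spread across however many cores the machine has, weak console cores or a fast desktop CPU alike:

```cpp
// Sketch: data-parallel rework of a monolithic entity update loop.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Entity { float x = 0.f, vx = 1.f; };

// Per-entity work with no shared writes, so ranges can run concurrently.
void update_range(std::vector<Entity>& es, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        es[i].x += es[i].vx * dt;
}

void update_all(std::vector<Entity>& es, float dt) {
    const std::size_t n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (es.size() + n - 1) / n;
    std::vector<std::thread> pool;
    for (std::size_t t = 0; t < n; ++t) {
        std::size_t b = t * chunk, e = std::min(es.size(), b + chunk);
        if (b < e) pool.emplace_back(update_range, std::ref(es), b, e, dt);
    }
    for (auto& th : pool) th.join();  // barrier before the next frame stage
}

int main() {
    std::vector<Entity> entities(10000);
    update_all(entities, 1.0f / 60.0f);
}
```

The same code scales down to eight slow Jaguar cores and back up to a desktop chip, which is the feedback-into-PC-performance point being made above.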
http://www.neogaf.com/forum/showthread.php?t=532077
http://www.amd.com/us/products/technologies/gddr5/Pages/gddr5.aspx
AMD has this image in the link; how does this translate to the PS4/XB1, or is it an exaggerated image?
Based on internal ASIC power measurement data from the AMD Radeon HD 6490M 1GB GDDR5 @ 750MHz (engine clock) / 800MHz (memory clock) = 16.5W vs. the AMD Radeon HD 6470M 1GB DDR3 @ 750MHz (engine clock) / 900MHz (memory clock) = 14.9W. Up to 29% performance improvement measured using 3DMark Vantage Performance scores: 1811 for the AMD Radeon HD 6490M compared to 1401 for the AMD Radeon HD 6470M.

System config: Asus M4A89GTD-Pro/USB3, AMD Phenom II 965 @ 2.3 GHz quad core, 4GB DDR3-1333, Windows 7 64-bit Ultimate, resolution 1280x800.
Man, I love nVidia and AMD's wonky graphs. The image itself says it's showing a 20% difference, but it looks like it's almost double because of the 3D columns and the angled axis.
It's been long enough that this specific thing has been talked about on GAF, so after you've read it I think your analysis is new-thread-worthy.

Hmmm, sounds like customized PC architecture more so than supercharged, but whatever.
Yeah it's misleading, but it's expressing a truth, and not just about AMD hardware: nVidia GPUs see the same kind of performance delta between DDR3 and GDDR5.
However, that says little about real-world comparisons of the consoles because it doesn't take into account the eSRAM architecture of the One, which was designed to attenuate the gap.
You know what I love about the "Microsoft is playing catch up" narrative being pushed for the past few months?
At E3, Microsoft had Forza 5 and Ryse on display, and they looked a hell of a lot more impressive and closer to being finished than Driveclub or Knack.
Lol, his posts do not imply what they do for the reasons you think. On the Xbox One, it's a lot more than just the RAM reserve. Their main emphasis was the OS/system features, hence the use of DDR3 instead of GDDR5, and Kinect being forced and packed in over providing better hardware. Everything about the console, from design to hardware, including all past leaked documentation, points to as much. Its very aim from the beginning was to be an all-in-one media machine, not just a games console. Thuway is absolutely right about this.
What do you mean it's more than just the RAM reserve? He said that last quote in an Xbox One RAM-reserve thread. Taking 3 gigs from developers for OS functions was proof that it wasn't a gaming machine, but potentially taking 3 gigs from developers on the PS4 was not an issue, because the developers said it was plenty and the difference between 5 and 7 wouldn't yield that big a difference. How would that same principle not apply for developers on the Xbox One?
I know it's DDR3 vs GDDR5, but let's also keep things in perspective here: the X1's memory bandwidth increased over the 360 by roughly the same amount that the 360 increased over the OG Xbox, about 3x.
OG Xbox bandwidth: 6.4 GB/s
Xbox 360 bandwidth: 22.4 GB/s = 3.5x increase over OG Xbox
Xbox One bandwidth: 68.3 GB/s = 3x increase over 360
So to say the Xbox One is less of a gaming machine is disingenuous unless you believe the Xbox 360 wasn't a gaming machine either.
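Checking the arithmetic on those published figures (my calculation, not the poster's), the generation-over-generation jumps are indeed in the 3x ballpark:

```latex
\frac{22.4\ \text{GB/s}}{6.4\ \text{GB/s}} = 3.5\times
\qquad
\frac{68.3\ \text{GB/s}}{22.4\ \text{GB/s}} \approx 3.05\times
```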
With respect to the advantages of GDDR5 over DDR3, there was this awesome post over at TechSpot by a developer:

It goes back to the nature of graphics rendering and how the polygons are drawn. Sorry if I'm teaching my grandmother to suck eggs, but it might be a little easier if I outline the graphics pipeline. I'll mark the video memory transactions as (vRAM input/output) below (probably better as a flow chart, but nvm).
On the software side you have your game (or app) ↔ API (DirectX/OpenGL) ↔ User Mode Driver / ICD ↔ Kernel Mode Driver (KMD) + CPU command buffer → loading textures to vRAM → GPU Front End (Input Assembler).
Up until this point you're basically dealing with the CPU and RAM: executing and monitoring game code, creating resources, compiling shaders, issuing draw calls, and allocating access to the graphics hardware (since you likely have more than just the game needing resources). From here, the workload becomes hugely more parallel and moves to the graphics card. The video memory now holds the textures and the compiled shaders that the game + API + drivers have loaded. These are fed into the first few stages of the pipeline, as and where needed, as the code is transformed from points (co-ordinates) and lines into polygons and their lighting:
Input Assembler (vRAM input) → Vertex Shader (vRAM input) → Hull Shader (vRAM input; called the Tessellation Control Shader in OpenGL, used only if tessellation is enabled) → Domain Shader (vRAM input) → Geometry Shader (vRAM input)
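To make the CPU-side steps above concrete, here's a heavily abridged C++/Direct3D 11 sketch of one draw submission. Device, swap-chain, and resource creation are omitted and every handle is assumed to already exist; it's an illustration of the API layer described above, not code from the post:

```cpp
// Abridged sketch: the CPU-side draw submission the post describes.
// The API and drivers turn these calls into a command buffer the GPU consumes.
#include <d3d11.h>

void draw_mesh(ID3D11DeviceContext* ctx,
               ID3D11InputLayout* layout,
               ID3D11Buffer* vertexBuffer,
               ID3D11VertexShader* vs,
               ID3D11PixelShader* ps,
               ID3D11ShaderResourceView* texture,
               UINT vertexCount, UINT stride) {
    UINT offset = 0;

    // Input Assembler: where the vertex data sitting in vRAM enters the pipeline.
    ctx->IASetInputLayout(layout);
    ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // Bind the compiled shaders for the vertex and pixel stages.
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);

    // A texture previously uploaded to vRAM, read by the pixel shader.
    ctx->PSSetShaderResources(0, 1, &texture);

    // The draw call: from here the work goes wide on the GPU.
    ctx->Draw(vertexCount, 0);
}
```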
At this point, the stream output can move all or part of the render back into the memory to be re-worked. Depending on what is called for, the output can be called to any part of the previous shader pipeline (basically a loop) or held in memory buffers. Once the computations are completed they then move to Rasterization (turning the 3D image into pixels):
Rasterizer → Pixel Shader* (vRAM input and output) → Output Merger (tasked with producing the final screen image, and requiring vRAM input and output)
* The compute shaders (if they exist on the card) are tasked with post-processing (ambient occlusion, film grain, global illumination, motion blur, depth of field, etc.), A.I. routines, physics, and a lot of custom algorithms depending on the app. They also run via the pixel shader and can use that shader's access to vRAM input and output.
So basically, the parallel nature of graphics calls for input and output from vRAM at many points, covering many concurrent streams of data. Some of that vRAM is also subdivided into memory buffers and caches to save data that would otherwise have to be recomputed for following frames. All this swapping of data calls for high bandwidth, but latency can be lax (saving power demand), as any stall in one thread is generally lost in the sheer number of threads queued at any given time.
As I noted previously, GDDR5 allows a write and read to/from memory every clock cycle, whereas DDR3 is limited to a read or a write, which reduces bandwidth. Graphics DDR also allows for multiple memory controllers to cope with the I/O functions.
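As a rough sanity check against the consoles' widely reported memory specs (a 256-bit bus on both, DDR3-2133 in the Xbox One vs 5500 MT/s effective GDDR5 in the PS4), peak bandwidth is just transfer rate times bus width; my arithmetic, not part of the original post:

```latex
\underbrace{2133 \times 10^{6}\ \text{T/s} \times 32\ \text{B}}_{\text{Xbox One, DDR3-2133, 256-bit}} \approx 68.3\ \text{GB/s}
\qquad
\underbrace{5500 \times 10^{6}\ \text{T/s} \times 32\ \text{B}}_{\text{PS4, GDDR5, 256-bit}} = 176\ \text{GB/s}
```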
The use of GDDR5 is probably mandatory if you note the likelihood of increased complexity in the next generation console games (higher polygon counts, more complex post process image quality). The PS4 will use an AMD APU, which has already demonstrated that it is very sensitive to memory bandwidth, and given the long life cycle of a console it needs a degree of future proofing by adding as much bandwidth as possible.
Hear, hear. Well said.
Just because the PS4 is better does not make the X1 bad by next-gen standards.
It's not 2005 anymore, dude. Next-gen graphics require next-gen graphical features, physics, simulations and so on that are far more bandwidth-centric and demanding. That is why GDDR5 is the de facto choice for video cards today. The only reason Microsoft went with DDR3 is because nobody knew if 4 GB sticks of GDDR5 would be available in time, and they needed the large RAM quantity for that 3 GB OS reserve (so 8 GB was essentially a must from the beginning, with system uses at the forefront of the design). Sony lucked out, and Microsoft is left with a more limited memory system with much less bandwidth.

Please point out where I said GDDR5 wasn't better. All I was pointing out is that the Xbox One isn't any less of a gaming machine because GDDR5 has more bandwidth. The X1 is still a plenty capable piece of hardware in spite of losing to the PS4 in specs.
But that's mainly why Microsoft put the eSRAM in there. It's a band-aid solution to the problem, and it will certainly help towards alleviating the issue, but not cure it altogether.