
Kotaku Rumor: PlayStation 4 codenamed 'Orbis', 2013, AMD CPU, SI GPU, Anti-Used Games

brain_stew said:
I've never been one for tact, sorry. I don't mean ill, but this thread is becoming increasingly bogged down with irrelevant garbage; time to separate the wheat from the chaff.

This thread is purely speculative, so please check your egotism a bit.

Abandoning backwards compatibility with the PSN-platform software represents an enormous business risk in a generation where the quality/range of online services is a critical differentiator.

Not to mention it's a generation where the best-selling platform was the one that abandoned the technological arms race and opted instead for a mildly upgraded version of its (then) current technology.

If you're right, you're right. Congratulations. But treating the rumoured approach as a fait accompli simply isn't accurate.
 
Sorry, I meant GPUs. You can't simply keep cranking up texture sizes and expect the GPU's fillrate and bandwidth to maintain frame rate.

What I was trying to say is that if a dev is really using 2GB of textures, the GPU is probably going to choke anyway, unless you start making sacrifices on AA, resolution, effects, etc.

While I understand what you're trying to say, there's more than just textures that would be stored in the 2GB of memory.
 
Just out of curiosity, how likely is it that they'll go with a 256-bit bus?

XDR bus sizes are roughly twice as large from a physical standpoint compared to conventional DDR/GDDR because each memory bit signal requires two wires with differential signalling.

Basically, your 256-bit XDR will be about as large as a 512-bit GDDR bus on the perimeter of the GPU/APU/whatever processor.
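For a rough sense of the numbers (a back-of-the-envelope sketch; the per-pin data rate is an illustrative assumption, not a rumored spec): doubling the bus width roughly doubles peak bandwidth at the same per-pin rate, while differential signalling doubles the data-wire count.

```python
# Rough pin-count / bandwidth comparison: GDDR-style single-ended data pins
# vs. XDR-style differential signalling (two wires per data bit).
# The 4 Gbps per-pin data rate is an illustrative assumption.

def bandwidth_gbs(bus_bits, gbps_per_pin=4.0):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits * gbps_per_pin / 8

def data_wires(bus_bits, differential=False):
    """Approximate data-signal wire count; differential signalling doubles it."""
    return bus_bits * (2 if differential else 1)

for width in (128, 256, 512):
    print(f"{width}-bit single-ended: ~{bandwidth_gbs(width):.0f} GB/s, "
          f"{data_wires(width)} data wires")

# A 256-bit differential (XDR) interface needs about as many data wires
# as a 512-bit single-ended (GDDR) one:
print("256-bit XDR data wires:", data_wires(256, differential=True))
```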
 
Their reasoning is the complexity of the motherboard design. However, the rumors point to Sony using advanced manufacturing methods such as silicon interposers, which allow multiple separately produced pieces of silicon to be combined into one package, making high-bandwidth memory possible without all those pesky motherboard traces.
XDR bus sizes are roughly twice as large from a physical standpoint compared to conventional DDR/GDDR because each memory bit signal requires two wires with differential signalling.

Basically, your 256-bit XDR will be about as large as a 512-bit GDDR bus on the perimeter of the GPU/APU/whatever processor.

So, a 256-bit bus is doable given the physical and budget limits? If so, I feel relieved, as it feels like console systems have been using a 128-bit bus for ages. Although I have to admit I don't know exactly how much performance difference there would be between 128-bit and 256-bit on otherwise similar hardware. =) 128-bit just feels so antiquated.
 
An example: going from 1280x720 to 1920x1080 with 2x AA, double buffered, takes the frame buffer from ~14MB to ~32MB. I'm not sure how an 18MB increase in required frame buffer size necessitates a 1GB increase in physical VRAM. A larger frame buffer does not mean the textures use more space in video memory.
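To sanity-check those figures, a quick sketch (colour buffers only, 32-bit colour; depth/stencil and auxiliary targets ignored):

```python
# Colour framebuffer size: width * height * bytes-per-pixel * MSAA samples
# * buffer count. Depth/stencil and auxiliary render targets are ignored.

def framebuffer_mb(width, height, msaa=1, buffers=2, bpp=4):
    return width * height * bpp * msaa * buffers / (1024 ** 2)

print(f"720p,  2xAA, double-buffered: ~{framebuffer_mb(1280, 720, msaa=2):.0f} MB")   # ~14 MB
print(f"1080p, 2xAA, double-buffered: ~{framebuffer_mb(1920, 1080, msaa=2):.0f} MB")  # ~32 MB
```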

Larger textures need more space, but I've owned a 1GB card on my PC for many years and have yet to see it run out of VRAM before it runs out of pixel-pushing horsepower.

Epic has said that 4xMSAA requires 4x the amount of video RAM.

Without anti-aliasing, Samaritan’s lighting pass uses about 120MB of GPU memory. Enabling 4x MSAA consumes close to 500MB, or a third of what's available on the GTX 580.

Don't really understand it myself, so until I read about it a little more, I'll use the appeal to authority argument.
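For intuition, here's an illustrative sketch (the target count and per-texel size are my own assumptions, not Epic's actual budget): a deferred renderer keeps several full-screen G-buffer targets, and MSAA stores every sub-sample, so the whole G-buffer scales roughly linearly with the sample count.

```python
# Deferred-shading G-buffer under MSAA: every render target stores one value
# per sample, so memory scales ~linearly with the MSAA sample count.
# The target count and per-texel size below are illustrative assumptions.

def gbuffer_mb(width, height, targets=5, bytes_per_texel=8, samples=1):
    return width * height * targets * bytes_per_texel * samples / (1024 ** 2)

print(f"no AA:  ~{gbuffer_mb(1920, 1080):.0f} MB")            # ~79 MB with these assumptions
print(f"4xMSAA: ~{gbuffer_mb(1920, 1080, samples=4):.0f} MB") # ~4x the no-AA figure
```

That roughly matches the 120MB-to-500MB jump quoted for Samaritan.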
 
What are you saying? I was making the argument that next-gen GPUs will probably run out of fill rate before they run out of VRAM. How many games can actually go over 1GB at 1080p now? The Skyrim texture pack says min 1GB VRAM, which means it won't use more than that.
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.
These aren't next-gen games; this is current gen with slightly better textures, and it already goes beyond 1 GB at 1080p with ease. If we want something better next generation than the same games at 1080p, we'll need more than 1 GB of VRAM. If we want something better than what you can get on PC today in Crysis 2 and Skyrim, we'll need more than 2 GB of VRAM. I personally consider 4 GB of GPU-accessible memory the minimum for next generation. And you've probably heard that developers want even more than that.
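A quick illustration of why "slightly better textures" eats VRAM so fast (the sizes and counts below are illustrative, not taken from either game): doubling a texture's resolution quadruples its memory footprint.

```python
# Memory for a DXT5/BC3-compressed texture (1 byte per texel) with a full
# mip chain (~1.33x the base level). Doubling resolution => ~4x the memory.

def texture_mb(size, bytes_per_texel=1.0, mips=True):
    base = size * size * bytes_per_texel
    return base * (4 / 3 if mips else 1) / (1024 ** 2)

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: ~{texture_mb(size):.1f} MB each")

# 500 textures at 2048x2048 already come to roughly 2.6 GB:
print(f"500 x 2048^2: ~{500 * texture_mb(2048) / 1024:.1f} GB")
```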
 
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.
These aren't next-gen games; this is current gen with slightly better textures, and it already goes beyond 1 GB at 1080p with ease. If we want something better next generation than the same games at 1080p, we'll need more than 1 GB of VRAM. If we want something better than what you can get on PC today in Crysis 2 and Skyrim, we'll need more than 2 GB of VRAM. I personally consider 4 GB of GPU-accessible memory the minimum for next generation. And you've probably heard that developers want even more than that.

Your expectations may be a bit unrealistic.
 
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.
These aren't next-gen games; this is current gen with slightly better textures, and it already goes beyond 1 GB at 1080p with ease. If we want something better next generation than the same games at 1080p, we'll need more than 1 GB of VRAM. If we want something better than what you can get on PC today in Crysis 2 and Skyrim, we'll need more than 2 GB of VRAM. I personally consider 4 GB of GPU-accessible memory the minimum for next generation. And you've probably heard that developers want even more than that.

But, aren't we talking about consoles with their optimization and all?
 
But, aren't we talking about consoles with their optimization and all?

You can indeed optimize some stuff, and game engines treat VRAM differently and use it differently. But we are seeing 1GB of VRAM being used more and more frequently.

But I also think it's expecting too much to assume we'll get anything over 1GB.
 
Playstation 5 will be super backwards compatible with the PS4 and possibly forwards compatible. Stick a PS5 game into a PS4 and have it run on lower settings etc. One can dream.
 
I suspect they meant that more in terms of capacities ... but yeah, it could have been worded better.

Interesting ... and that's what I suspect the rumored 720 'dual GPU' is more akin to (though they may not be doing any sort of SLI/XFire).

I'd be happy if both systems have a lower power mode for media, etc.

I'd see the on-die GPU as a big, wide vector co-processor for the CPU, and you'd be able to take advantage of it in GPGPU scenarios with compute shaders (OpenCL) to assist the OS and the gameplay/physics/animation code running on the CPU side. The on-die GPU would basically improve on the CELL idea with a different take on the vector processing side.
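As a sketch of what that could look like from the code side (plain pyopencl here; whatever compute library Sony would actually ship is pure speculation, and the kernel is a toy stand-in for physics/animation-style vector work):

```python
# Minimal OpenCL compute dispatch via pyopencl: the same host code runs
# whether the device is an on-die GPU or a discrete one.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

prg = cl.Program(ctx, """
__kernel void integrate(__global float *pos, __global const float *vel, float dt) {
    int i = get_global_id(0);
    pos[i] += vel[i] * dt;          // toy per-particle position update
}
""").build()

n = 1 << 20
pos = np.zeros(n, dtype=np.float32)
vel = np.ones(n, dtype=np.float32)

mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vel)

prg.integrate(queue, (n,), None, pos_buf, vel_buf, np.float32(1.0 / 60.0))
cl.enqueue_copy(queue, pos, pos_buf)  # read results back to the CPU side
```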
 
brain_stew said:
We can bank on 2GB of GDDR5
It would be woefully ironic if after all the talk of WiiU being low-spec the "next-gen" consoles end up with the same amount of memory...

Panajev2001a said:
The on-die GPU would basically improve on the CELL idea with a different take on the vector processing side.
If the talk of hiding everything behind increasingly high levels of abstraction comes to anything, I'm really not clear where a custom on-die GPU would fit. As it is, I'm expecting this to be the most boring generation ever hw-wise.
 
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.
These aren't next-gen games; this is current gen with slightly better textures, and it already goes beyond 1 GB at 1080p with ease. If we want something better next generation than the same games at 1080p, we'll need more than 1 GB of VRAM. If we want something better than what you can get on PC today in Crysis 2 and Skyrim, we'll need more than 2 GB of VRAM. I personally consider 4 GB of GPU-accessible memory the minimum for next generation. And you've probably heard that developers want even more than that.

That would be really expensive though... that's the real problem.
 
It would be woefully ironic if after all the talk of WiiU being low-spec the "next-gen" consoles end up with the same amount of memory...


If the talk of hiding everything behind increasingly high levels of abstraction comes to anything, I'm really not clear where a custom on-die GPU would fit. As it is, I'm expecting this to be the most boring generation ever hw-wise.

Well, I'm not saying they should hide everything under a high-level API, just that the possibility of doing quite advanced things with OpenCL is there, just as it was with CUDA before, etc... Sony could also do their own compute shader library exposing some more advanced features.

The custom on-die GPU would fit as a vector co-processor to the CPU (compute shaders/OpenCL kernels would allow regular C/C++ code to access this co-processor in a clean and organized manner), not as a way to share rendering workload with the main GPU. IMHO, it could be used the same way the SPUs were used before... you have a LS, you have some cache, you have tons of registers, and a sea of vector ALUs... hey, we are back to VLIW just like with the VUs, sort of ;).
 
Having an APU in the mix actually solves some problems that would otherwise have existed in relying on GPGPU. The reason stuff like PhysX doesn't actually impact gameplay is that getting that information from the GPU back to the CPU, so it can interact more meaningfully with the simulation and inputs, is basically infeasible. Having shaders on the same die as the CPU and sharing a memory space should help with those issues.
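A sketch of the round-trip being described (pyopencl again; the kernel is a hypothetical stand-in): the blocking readback at the end is where a discrete GPU makes the CPU wait, and it's exactly the step a shared-memory design shortens.

```python
# The GPGPU round-trip that makes gameplay-affecting physics hard on a
# discrete GPU: upload -> kernel -> *blocking* readback before the CPU-side
# game logic can react. On shared memory the copies largely disappear.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

state = np.random.rand(1 << 18).astype(np.float32)
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=state)  # upload

prg = cl.Program(ctx, """
__kernel void step(__global float *s) {
    int i = get_global_id(0);
    s[i] *= 0.99f;                 // stand-in for one physics step
}
""").build()
prg.step(queue, state.shape, None, buf)

cl.enqueue_copy(queue, state, buf)  # blocking readback: the CPU stalls here
# ...only now can gameplay code branch on the simulation's results.
```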
 
You know, the engineers who are actually in the MS or Sony bunkers designing the new consoles must piss themselves laughing when they read threads like this on various forums.
 
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.
These aren't next-gen games; this is current gen with slightly better textures, and it already goes beyond 1 GB at 1080p with ease. If we want something better next generation than the same games at 1080p, we'll need more than 1 GB of VRAM. If we want something better than what you can get on PC today in Crysis 2 and Skyrim, we'll need more than 2 GB of VRAM. I personally consider 4 GB of GPU-accessible memory the minimum for next generation. And you've probably heard that developers want even more than that.

PC games don't use streaming for textures... all textures are stored in VRAM.
 
Having an APU in the mix actually solves some problems that would otherwise have existed in relying on GPGPU. The reason stuff like PhysX doesn't actually impact gameplay is that getting that information from the GPU back to the CPU, so it can interact more meaningfully with the simulation and inputs, is basically infeasible. Having shaders on the same die as the CPU and sharing a memory space should help with those issues.

Yes, I think so too.
 
Panajev2001a said:
you have a LS, you have some cache, you have tons of registers, and a sea of vector ALUs... hey, we are back to VLIW just like with the VUs, sort of ;).
The reason SPUs and VUs work is that they are more than just vector ALUs - or compute shaders. Once you dumb it down to that level you might as well just use the main GPU through one of the compute APIs and get the same level of usability without paying for extra hardware.

And if people worry about GPU-CPU interaction not being fast enough to be feasible, they should worry about it here too, because it's fundamentally an issue of abstraction layers, not hardware performance, and I just don't see us getting raw hw access to this - not if the rumours have any truth to them.
 
Skyrim with the official high-resolution texture pack (which isn't that high resolution, really) consumes about 1.7 GB of VRAM at 1080p with 8x MSAA. And that's before any uGrids tweaks.
Crysis 2 with the official hi-res texture pack tends to float around 2 GB of VRAM.

Skyrim at 1440p, 8xAA and high-res textures uses only 1.2GB of VRAM for me.
 
I would seriously stop buying games at anything higher than $10 if this anti-used-games system is true.

It won't change my purchasing habits. But I'm already very picky about what I buy. I don't blame them for wanting to shut out Gamestop.


This all-AMD platform has intriguing possibilities. The discrete GPU sounds underwhelming, but it could be decent considering a highly optimized design and software.
 
Having an APU in the mix actually solves some problems that would otherwise have existed in relying on GPGPU. The reason stuff like PhysX doesn't actually impact gameplay is that getting that information from the GPU back to the CPU, so it can interact more meaningfully with the simulation and inputs, is basically infeasible. Having shaders on the same die as the CPU and sharing a memory space should help with those issues.

Even with a dedicated GPU, the CPU and GPU will be sharing a common memory pool. AMD's APUs have very little on-chip communication afaik; they don't even have a common on-chip memory pool, unlike Intel's solutions.
 
It won't change my purchasing habits. But I'm already very picky about what I buy. I don't blame them for wanting to shut out Gamestop.
For me it hasn't got much to do with buying used games; I don't usually buy used games. I just want to borrow or lend games to friends and family. And of course, when the resale value of the game you buy is zero, I can't see the $60 being worth it.
 
The reason SPUs and VUs work is that they are more than just vector ALUs - or compute shaders. Once you dumb it down to that level you might as well just use the main GPU through one of the compute APIs and get the same level of usability without paying for extra hardware.

I'd rather let the main GPU concentrate on rendering and post-processing effects. The APU can, and should, exploit a level of integration that the main GPU does not have access to.
CUDA and OpenCL have evolved greatly over what they offered back when G80 launched (well, only CUDA back then), and even at that time you could do some pretty nice things on the GPU alone.
I do think there would be a lower-level API, but I'd think many people would find good use in doing work with compute shaders, with a hopefully unified address space between the CPU and on-die GPU able to transparently share data between them. A lot of libraries like Bullet have been adding OpenCL/compute shader support.

And if people worry about GPU-CPU interaction not being fast enough to be feasible, they should worry about it here too, because it's fundamentally an issue of abstraction layers, not hardware performance, and I just don't see us getting raw hw access to this - not if the rumours have any truth to them.

I think that SCE and AMD have the time and resources (time being the most important factor here) to provide a way to exploit the close ties between the CPU cores and the on-die GPU, and the much lower latency in data communication between the two, since they share a common cache. That holds whether you get only very low-level access to this, or SCE exploits the underlying HW and driver capabilities to provide a higher-level API which is faster than it would have been if the CPU and GPU were two common, separate compute nodes.

Also, PS4 might enjoy a lower-power mode when it plays movies, music, or pictures, or streams data to a lower-resolution device such as the PS Vita or PSP.
 
Even with a dedicated GPU, the CPU and GPU will be sharing a common memory pool. AMD's APUs have very little on-chip communication afaik; they don't even have a common on-chip memory pool, unlike Intel's solutions.

I don't know if that is necessarily a given. All these "dual GPU" rumors have cast a great deal of doubt on UMA in my mind. And greater interoperability has always been AMD's goal with Fusion. It remains to be seen just how far down that line these console designs will be.
 
If the talk of hiding everything behind increasingly high levels of abstraction comes to anything, I'm really not clear where a custom on-die GPU would fit. As it is, I'm expecting this to be the most boring generation ever hw-wise.
Boring in terms of raw performance maybe, but it's starting to sound like both have some interesting designs going on that are not necessarily targeting graphics. And with that in mind, I expect this will be the most exciting generation in terms of features and services. Obviously that doesn't cater to everyone though.

Also, PS4 might enjoy a lower-power mode when it plays movies, music, or pictures, or streams data to a lower-resolution device such as the PS Vita or PSP.
Yep.

It's funny. When I first read the headline that 720 would have a dual GPU configuration, my initial thought was it would be a design like what is being proposed here. I figured APU + Discrete GPU, where a lower powered mode would offer media playback, XBL games, etc. Basically turn off the discrete GPU and a few CPU cores.

Once I read the actual details of the rumor though (the leak expressly states that the dual GPUs do not work in any sort of SLI/XFire mode), I had to reevaluate how that would make sense. I then remembered a rumor from a while back that 'Loop' may actually be an ARM SoC. Running with that, I posited that 720 may have dedicated 'gaming' HW plus the ARM SoC. That would allow it to function as Loop, which I suspect is meant to be an always-on device like a Roku or cable box, in a very low power mode. When 'real' games are played, the other HW turns on.

Obviously this seems a bit convoluted, but if ARM made the most sense for Loop ... using the same HW in 720 as well is probably a better way to go than attempting to emulate or port everything. Plus, there's the possibility the Loop HW could be used in conjunction with the main HW. Obviously not for graphics, but it could be used for Kinect processing (since Loop supports it), multi-tasking of media and social services, etc. Those things could be offloaded so the main HW is freed for mostly gaming-only duties. The OS resource footprint would be nice and small.

A day or two passes ... and then along comes this thread, stating PS4 may also have dual GPUs.

And now my initial architecture thoughts have come full circle. The PS4 being proposed here is essentially how I envisioned 720 before reading the details for its rumor. In many ways it sounds like a means to a similar end ... a way to have a lower powered mode for services ... only since Sony isn't releasing something like Loop, they can keep the design a bit simpler.
 
It would be woefully ironic if after all the talk of WiiU being low-spec the "next-gen" consoles end up with the same amount of memory...
Isn't there a rumor that WiiU has only 1GB?

As it is I am more expecting this to be the most boring generation ever hw-wise.
It does seem like it. Actually more like under-specced and boring.

Running with that, I posited that 720 may have dedicated 'gaming' HW plus the ARM SoC. That would allow it to function as Loop, which I suspect is meant to be an always-on device like a Roku or cable box, in a very low power mode. When 'real' games are played, the other HW turns on. Obviously this seems a bit convoluted, but if ARM made the most sense for Loop ... using the same HW in 720 as well is probably a better way to go than attempting to emulate or port everything. Plus, there's the possibility the Loop HW could be used in conjunction with the main HW.
Unless the 'main' hardware is a dual-GPU setup only, and the ARM SoC is all there is on the CPU side, for both the 'Loop' and 'main' modes. It makes more sense than you'd think, considering there was a rumor that they'd have dedicated ARM cores for separate stuff like physics, AI, video decoding, etc.

At least all of that makes X720 hardware sound somewhat interesting, even if it ends up being underwhelming in terms of power.
 
It's not just anti-"used games" though, is it? It's not really about that; it's about locking a game to one console. So all manner of renting games and borrowing games from friends and family goes right out of the window, all in the name of punishing those nefarious people who are too poor to buy games on launch date. It affects everyone, even those who may well purchase games on release date.
 
Locking a game to one console is anti-used games. I laugh hard at all the developers/publishers who think this will lead to more sales. Maybe for Call of Duty and a few other games it will; most other games that involve any sort of unknown will just become too risky. If I can't resell or return my copy of Ninja Gaiden 3, do you think I would ever even think about buying Ninja Gaiden 4, or any Team Ninja game for that matter? Will there be a way to hire games with this new system? A demo alone is never dependable when it comes to dropping the kind of money a new game costs.
 
Lord Error said:
Isn't there a rumor that WiiU has only 1GB?
There are many rumors about WiiU :P

Raistlin said:
Boring in terms of raw performance maybe, but it's starting to sound like both have some interesting designs going on that are not necessarily targeting graphics.
I was talking from a dev perspective, where none of the hw rumors so far sound particularly interesting, and we've already known for 5 years that we'll be getting less hw access than ever anyway.

From a consumer standpoint it's another story - but that's a discussion for another thread; personally I feel all consoles are facing an uphill battle this time around.

a way to have a lower powered mode for services
Why would that need specialized hw at all? Modern chips can scale voltages and clocks, and power down entire pieces of the machine at will. Unless you're talking about WiiU-style physically separate hw for "services".

Panajev2001a said:
or SCE exploits the underlying HW and driver capabilities
Yeah, I'm not holding my breath for the latter, unless we're talking about exploiting drivers to choke the hw.
 
More likely flood, but I guess it's just no jokes allowed. Got it.

The point about MSAA is useless. Besides Vita and WiiU, most devs will use refined versions of FXAA or similar, especially because of deferred engines and the performance gain. Even Vita and WiiU will eventually abandon MSAA to use those resources in other areas.

MSAA's era is gone.

Anyway, we don't know what he was comparing it to. I'd guess 720p with MSAA.
 
I was about to subscribe to PSM magazine for three months or something like that when I saw this
Out and about? Don't forget to check out latest issue of @PSM3_Magazine: Max Payne 3, 'next MGS exposed' plus Sony's big PS4 gamble

But when I saw that for the European region the shortest subscription is 1 year, I said screw it.

Does anyone have the magazine? On the front it says "Sony's big U-turn on PS4 exposed".
 
I was about to subscribe to PSM magazine for three months or something like that when I saw this
Out and about? Don't forget to check out latest issue of @PSM3_Magazine: Max Payne 3, 'next MGS exposed' plus Sony's big PS4 gamble

But when I saw that for the European region the shortest subscription is 1 year, I said screw it.

Does anyone have the magazine? On the front it says "Sony's big U-turn on PS4 exposed".

I hope they mean "Sony's big PS4 gamble" in the sense of going with a beast machine again, this time (relatively) reasonably priced.
 
I hope they mean "Sony's big PS4 gamble" in the sense of going with a beast machine again, this time (relatively) reasonably priced.

Or it could mean Sony is gambling on releasing the PS4 much sooner.

I want the Apple-style approach: announce, then release ASAP. The gap likely won't be as short as Apple's, but something like less than a year. The Wii U is quite a nightmare; it was announced way too far in advance.
 
I hope they mean "Sony's big PS4 gamble" in the sense of going with a beast machine again, this time (relatively) reasonably priced.

Big U-turn, and big PS4 gamble... it doesn't seem like they are going down the same path as the PS3 or PS2.

It seems like the rumors about Sony going more off-the-shelf might be on the mark at least, in line with what they did with the Vita.

Depending on Vita sales vs. the 3DS, I'd also wager that they might be more price-sensitive than before.
 