
Sony on the future direction of PS3 and NGP, Watch Impress editor speculates on PS4

Argyle

Member
jeff_rigby said:
The design that lets the SPEs and the GPU work seamlessly together did not support, at launch, an efficient cross-platform OpenGL library of calls. The PS3 required low-level GPU calls that were incompatible with other platforms. This slowed development on the PS3.

Once developers got up to speed with development engines, one benefit of the PS3 CPU and GPU design was, as patsu said, MLAA. Another was deferred shader rendering (implemented in a PS3 game engine in 2008), which decreases memory usage and increases available bandwidth. Tile-based deferred rendering is used in the NGP GPU for the same reason. With a more expensive GPU available in 2005 that supported OpenGL, advanced features like MLAA and deferred shader rendering would not have been possible; they couldn't have been added with PS3 firmware changes or game engine development. (I didn't research this, so I'm guessing here.)

More memory in the PS3, faster Blu-ray drives, a larger hard disk, wireless N, and a 10x gigabit LAN port could be added to a PS3.5. But I see too many issues in replacing the GPU. (Putting the Cell and RSX on the same die and improving how the combination accesses memory, maybe. 20nm is going to change some design thinking. I expect a PS3.5, if coming, will be released when 20nm is available.)

Adding a couple of SPEs to the PS3, or enabling the 8th, yes. It would still be 100% compatible, and developers could provide higher resolutions or frame rates for 3.5 models. Sony could have an upgraded XMB for the 3.5 models.

I would guess that OpenGL is now supported on the PS3. The first version of OpenGL for the PS3 was, according to reports, buggy and slow. That has probably been corrected.

OpenGL is needed to support a webkit port and most likely PS Suite ports to the PS3. A PS4 will have to support OpenGL efficiently.

I'm not sure I understand what your point is.

OpenGL is not something that is "supported" by the GPU, it's really supported by the driver or OpenGL libraries for the GPU.

A programmer would write their code using the OpenGL API and from there it's up to the library to translate that into low-level GPU instructions.

I am pretty sure that the PS3 has always had an OpenGL-like API available to developers, but to be honest I do not think very many people use it anymore; instead, I think most people use the low-level library for performance reasons (at least on big-budget AAA games; I could see OpenGL being more useful for ports of indie PC games to, say, PSN).

I do not think a more modern GPU would have prevented people from implementing things like MLAA; I think it just would have meant that the PS3 would have a more modern GPU. I think that if some kind of PS3.5 were to come out, I would not be surprised to see an overclocked RSX, or perhaps one with more shader units available, or more memory or memory bandwidth or similar...as long as it was still microcode-compatible with the existing RSX.

Zaptruder said:
You've fully bought into the bullshit of the audio industry.

Also, you fail at reading comprehension.

Although I wouldn't argue that we need higher sampling rates, etc., there is some mathematical truth to what he says (IIRC, as you approach the Nyquist frequency, you can no longer accurately represent amplitude)...whether you can perceive that is a whole 'nother can of worms.
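(For what it's worth, a quick sketch of the math behind that claim, assuming an ideal sampler; this is just the textbook Nyquist-limit argument, not a claim about what's audible:)

[code]
% Tone at exactly the Nyquist frequency f_N = f_s/2, sampled at t_n = n / f_s:
\begin{align*}
x(t) &= A \sin(2\pi f_N t + \varphi) \\
x[n] &= A \sin(\pi n + \varphi) = (-1)^n A \sin\varphi
\end{align*}
% Every sample has magnitude A|\sin\varphi|; with \varphi = 0 every sample is
% zero, so the captured amplitude depends on the tone's phase relative to the
% sample clock. Strictly below f_N, ideal reconstruction recovers the tone
% exactly; the ambiguity exists only at (and, with real filters, very near)
% the limit.
[/code]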
 
Argyle said:
I'm not sure I understand what your point is.

OpenGL is not something that is "supported" by the GPU, it's really supported by the driver or OpenGL libraries for the GPU.

With thanks to Argyle for correcting my ASSumptions, I have edited the following:

Think about it: OpenGL is a library of GPU calls, i.e. the GPU must support the OpenGL calls. The OpenGL library is a standard that allows cross-platform support for graphics.

A programmer would write their code using the OpenGL API and from there it's up to the library to translate that into low-level GPU instructions.
Exactly

I am pretty sure that the PS3 has always had an OpenGL-like API available to developers, but to be honest I do not think very many people use it anymore; instead, I think most people use the low-level library for performance reasons (at least on big-budget AAA games; I could see OpenGL being more useful for ports of indie PC games to, say, PSN).
And yes again, except that, as I understand it, the Sony-provided OpenGL library was slow and buggy.

I do not think a more modern GPU would have prevented people from implementing things like MLAA; I think it just would have meant that the PS3 would have a more modern GPU.
And MLAA and deferred rendering would have been part of a more modern GPU, not part of game engine low-level code.

Edit: partially correct but very misleading. PC, PS3 and NGP use a low-level library of "C"-programmable graphics and shader calls. This feature is in desktop PC GPUs, the PS3 with its SPEs, and the very latest handheld GPUs. It allows the game engine or developer to support deferred shader rendering by writing their own low-level shader code. Google is including this support in Android 3.0 and, I think, 2.3.

I think that if some kind of PS3.5 were to come out, I would not be surprised to see an overclocked RSX, or perhaps one with more shader units available, or more memory or memory bandwidth or similar...as long as it was still microcode-compatible with the existing RSX.
yes

I edited Patsu's comment to explain what I was amplifying. Starting with Patsu:

Originally Posted by patsu:

(with editing) The ability of the general-purpose SPEs and the GPU to work seamlessly together, acting like an internally firmware-programmable GPU, would not be possible with other GPU packages. It's pretty forward thinking. See MLAA and deferred shading.



Two years after the release of the PS3, someone came up with MLAA and deferred shader rendering. Because the SPE-RSX combination could act like an internally programmable GPU, at least for some functions involving the SPEs, these features were added to a game engine that used low-level calls, and at GDC 2011 there are now two engines touting those features.

A more modern GPU might now include MLAA and some form of deferred rendering and also support OpenGL 2.0 efficiently, but 2005 GPUs didn't. I didn't research whether one did, but today several GPUs designed for portables do TBDR. It's a process vs. brute-force approach, with process being more efficient as far as batteries go, and now, with memory and bandwidth bottlenecks, more are adopting TBDR.


OpenGL 2.0 is needed for the WebKit port and for ports to the PS3 via PS Suite. This supports the reasons behind a rewrite of the OpenGL library. OpenGL 2.0 is a library of low-level routines for a programmable GPU. Edit: PSGL is to be used, which is a library of "C" low-level programmable graphics and shader routines. OpenGL is changed to PSGL in the following.

Newer PS3 games may have Sony suggesting developers use PSGL calls to ensure portability between PS3 and NGP. Support for this last thought comes from a developer comment: "PS3 and NGP calls just work" (with the NGP at full resolution = 1/2 the PS3). With PSGL being used on the PS3, remote play on the NGP is easily possible, as is remote desktop if the XMB is rewritten to use PSGL calls.
 

Argyle

Member
jeff_rigby said:
I edited Patsu's comment to explain what I was amplifying. Starting with Patsu:

Originally Posted by patsu:

(with editing) The ability of the general-purpose SPEs and the GPU to work seamlessly together, acting like an internally firmware-programmable GPU, would not be possible with other GPU packages. It's pretty forward thinking. See MLAA and deferred shading.


The original 2006 PS3 OpenGL library of calls was provided by Nvidia and had to be modified by Sony because of their SPE - RSX GPU combination. Since low level would be more efficient, Sony did not bother to do a good job of it (at least that's my guess).

Two years after the release of the PS3, someone came up with MLAA and deferred shader rendering. Because the SPE-RSX combination could act like an internally programmable GPU, at least for some functions involving the SPEs, these features were added to a game engine that used low-level calls, and at GDC 2011 there are now two engines touting those features.

A more modern GPU might now include MLAA and some form of deferred rendering and also support OpenGL efficiently, but 2005 GPUs didn't. I didn't research whether one did, but today several GPUs designed for portables do TBDR. It's a process vs. brute-force approach, with process being more efficient as far as batteries go, and now, with memory and bandwidth bottlenecks, more are adopting TBDR.

Your explanation of OpenGL matches my understanding. Deferred shader rendering and MLAA may now be a transparent part of the PS3 OpenGL library of calls. The PS3 SPE-RSX, as far as programmers are concerned, may NOW support OpenGL as efficiently as a more modern GPU which internally supports deferred rendering to increase performance.

I guess I kinda see what you are saying, but I don't think it really works that way. For one, I don't think modern PC GPUs "internally support deferred rendering" - maybe the confusion here is between the tile-based deferred rendering hardware in the PowerVR-based GPUs and the deferred shading/lighting used in many modern games; despite the similarity in the names, I don't think those algorithms are the same.

Modern PC GPUs have added more functionality to make it easier to write a renderer using deferred shading/lighting, but this functionality is pretty low level.

I don't think there was really a plan for the SPEs and the RSX to work "seamlessly" - where I will define "seamlessly" as "transparent to the programmer." (as opposed to "graphics pipeline is implemented to use both resources", certainly you can argue that any final implementation is "seamless" because it works...) You always have to plan on what your SPEs would do vs. what you wanted the RSX to do and architect your rendering pipeline accordingly.

I also do not believe that there are special PS3 specific OpenGL extensions to enable MLAA/deferred rendering (and if there were, I'm not sure I would understand what they would be - I would think that those techniques are higher level than the aim of OpenGL).

That said, I'm pretty sure there's nothing stopping you from implementing deferred rendering or MLAA even if you are using Sony's version of OpenGL. (To be fair, one thing I am not is an OpenGL expert, so maybe I have been looking at lower level stuff too long :)

It's basically analogous to an algorithm vs. a programming language. Generally speaking, a programming language doesn't have to have an algorithm "built-in" for it to be possible for you to implement it in that language. For example, what is deferred shading/lighting? My understanding of it is that instead of writing color values into your intermediate buffers, you write other data to the buffers like normals and what not, then you build the final color buffer by running a pixel shader that implements your lighting, reads the values from all of your intermediate buffers as textures, and writes the color to the final framebuffer. That said, what's stopping you from implementing this in OpenGL? Just allocate the buffers as needed and write your shaders to implement this algorithm.
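To make that concrete, here is roughly what the buffer setup looks like in plain desktop OpenGL 3.x (a framebuffer object with multiple render targets). This is a generic sketch, not PSGL or anything Sony-specific; it assumes a GL context plus a loader like GLEW already exist, and the shaders and the fullscreen-quad lighting pass are omitted:

[code]
/* Sketch: G-buffer setup for deferred shading with generic OpenGL 3.x.
 * Assumes a GL context and an extension loader (GLEW here) already exist. */
#include <GL/glew.h>

GLuint gbuffer_fbo, tex_albedo, tex_normal, tex_depth;

static GLuint make_target(GLenum internal_fmt, GLenum fmt, GLenum type, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_fmt, w, h, 0, fmt, type, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}

void gbuffer_init(int w, int h)
{
    glGenFramebuffers(1, &gbuffer_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo);

    /* The geometry pass writes material/normal data instead of final color. */
    tex_albedo = make_target(GL_RGBA8,   GL_RGBA, GL_UNSIGNED_BYTE, w, h);
    tex_normal = make_target(GL_RGBA16F, GL_RGBA, GL_FLOAT,         w, h);
    tex_depth  = make_target(GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT,
                             GL_UNSIGNED_INT, w, h);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex_albedo, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex_normal, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, tex_depth, 0);

    GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);            /* MRT: one draw fills both targets */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

/* Per frame:
 *   1. bind gbuffer_fbo and draw the scene with a shader that outputs
 *      albedo + normals (no lighting yet);
 *   2. bind the default framebuffer, bind the three targets as textures,
 *      and draw a fullscreen quad whose pixel shader does all the lighting
 *      and writes the final color. */
[/code]

The point being: it's all ordinary API usage plus your own shaders; nothing here needs an "MLAA extension" or "deferred rendering extension" in the graphics library itself.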

Similarly, MLAA is a post-process: once you have the near-final framebuffer, you basically throw your SPEs at it and let them process the image. Is there really any difference between rendering that near-final framebuffer with a low-level library vs. OpenGL? Why implement that as some kind of extension to OpenGL when you can just write a generic implementation that says "allocate a framebuffer, point this library at it and it will kick off a bunch of SPE tasks to antialias your framebuffer"?
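And as a toy illustration of the "it's just a post-process over a finished color buffer" point, something like the loop below is the whole interface a renderer needs to expose. To be clear, this is a deliberately dumb edge-blend stand-in written for a plain RGBA buffer in system memory; it is not real MLAA and not Sony's SPE library:

[code]
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for an AA post-process: find strong luminance edges and blend
 * across them. Real MLAA classifies edge shapes and computes coverage-based
 * blend weights; this only shows that the input is a finished color buffer. */
static int luma(const uint8_t *px)            /* px points at R,G,B,A */
{
    return (px[0] * 54 + px[1] * 183 + px[2] * 19) >> 8;
}

void post_aa_pass(uint8_t *framebuffer, int width, int height)
{
    const int threshold = 32;
    size_t bytes = (size_t)width * height * 4;
    uint8_t *src = malloc(bytes);
    if (!src) return;
    memcpy(src, framebuffer, bytes);          /* read from a copy, write in place */

    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            const uint8_t *p = src + ((size_t)y * width + x) * 4;
            int edge = abs(luma(p) - luma(p + 4)) > threshold ||
                       abs(luma(p) - luma(p + (size_t)width * 4)) > threshold;
            if (!edge) continue;
            uint8_t *out = framebuffer + ((size_t)y * width + x) * 4;
            for (int c = 0; c < 3; ++c)       /* blend with right/below neighbors */
                out[c] = (uint8_t)((p[c] * 2 + p[4 + c] + p[(size_t)width * 4 + c]) / 4);
        }
    }
    free(src);
}
[/code]

Whether that inner loop runs on SPE jobs or anywhere else is an implementation detail behind the one "point this at my framebuffer" call.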

Additionally, I don't think communication between the RSX and the SPEs would ever be "seamless" because of various constraints as to where your data has to live if you want the SPEs to do work on them...with the current PS3 architecture, you definitely have to do some planning on how your rendering pipeline works if you want to use the SPEs.

Edit: sorry, you're editing as I reply :) This:

jeff_rigby said:
Think about it: OpenGL is a library of GPU calls, i.e. the GPU must support the OpenGL calls. The OpenGL library is a standard that allows cross-platform support for graphics.

...is not correct.

Let me put it this way - the GPU, at its core, implements an instruction set that tells it what to do. What the OpenGL library does is generate a buffer full of GPU instructions, and then the GPU goes off and runs it. It's the library that "supports" the OpenGL calls, the calls into the OpenGL library call the lower level GPU instructions for you. Some of the overhead is that the CPU has to do a bit more work to translate things into a format that the GPU understands.

The OpenGL API is generic and cross platform, cross GPU. Maybe you can have hardware that makes this translation closer to 1:1, but generally IMHO you're better off using those transistors for other things (like more shader processing units, etc.)

If you have a lower level library, what you basically have is an API that more closely mimics the actual GPU microcode vs. the generic OpenGL API. Does that make sense? It's late here so I am sorry if this is confusing or hard to follow :)
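If a sketch helps, the "API call becomes a command buffer" idea looks roughly like this. Everything here (the packet format, the opcodes, the function names) is invented purely for illustration; it is not any real driver's or console SDK's code:

[code]
#include <stdint.h>
#include <stddef.h>

/* Hypothetical illustration of what a GL-style library does under the hood:
 * API calls don't run on the GPU, they append packets to a command buffer
 * that the GPU's front end later consumes. All names and opcodes are invented. */
enum { CMD_SET_TEXTURE = 0x10, CMD_SET_SHADER = 0x11, CMD_DRAW_ARRAYS = 0x20 };

typedef struct {
    uint32_t words[4096];
    size_t   count;
} cmd_buffer;

static void push(cmd_buffer *cb, uint32_t w) { cb->words[cb->count++] = w; }

/* Roughly what a call like glDrawArrays(GL_TRIANGLES, first, count) boils
 * down to: the CPU-side library validates state, resolves handles to GPU
 * addresses, and emits a few words the GPU's command processor understands. */
void fake_gl_draw_arrays(cmd_buffer *cb, uint32_t vb_gpu_addr,
                         uint32_t first, uint32_t count)
{
    push(cb, CMD_DRAW_ARRAYS);
    push(cb, vb_gpu_addr);   /* where the vertex data lives in GPU memory */
    push(cb, first);
    push(cb, count);
}

/* A lower-level console library exposes nearly this layer directly, which is
 * why the "translation" overhead of a portable API largely disappears there. */
[/code]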

Edit #2, and I am going to bed after this!

OpenGL is needed for the webkit port and ports to the PS3 via PS Suite. This supports the reasons behind a rewrite of the OpenGL library.

Newer PS3 games may have Sony suggesting developers use OpenGL calls to ensure portability between PS3 and NGP. Support for this last thought comes from a developer comment: "PS3 and NGP calls just work" (with the NGP at full resolution = 1/2 the PS3). With OpenGL being used on the PS3, remote play on the NGP is easily possible, as is remote desktop if the XMB is rewritten to use OpenGL calls.

I'm not sure I follow this - what in Webkit requires OpenGL? Is it the implementation of WebGL? (You could just choose not to support it, if so, and if you did want to support it, I would think that Sony's OpenGL implementation is good enough. Even though I do not believe many modern games use it these days, it has definitely shipped a bunch of PS3 titles...)

I don't know much about NGP programming, but when I read those comments, I assumed they meant "we brought our assets, shaders, and rendering pipeline over from the PS3 and they just worked" (which makes sense - doesn't every GPU provider have some sort of shader compiler that takes HLSL/Cg shaders and outputs native GPU microcode?)

I don't understand what you mean about Remote Play, the current implementation encodes the output as video and streams it over the network, what would be different under OpenGL? You could maybe send the stream of OpenGL calls over the network instead but both sides would need similar data loaded to render the image...and I have a gut feeling that stream of calls would take up more bandwidth than a compressed video stream...just a hunch, maybe I am totally wrong on that :)
 
jeff_rigby said:
The reasons for using 4K in movie production are many, and some apply to the consumer market. For a source platform, i.e. a Blu-ray player, having a 4K standard and being able to output video at any resolution up to 4K is the best of all worlds. Film has a resolution somewhere between 2K and 4K. The current TV standards we use compress color depth while 4K does not. This does create a visible difference even on smaller screens.

News I've read has Japan and China with broadcast 4K by 2015. Wikipedia is predicting 4K Blu-ray players in 2012. 4K is currently required for polarized 1080p 3D and for glassless head-tracking 3D TVs (which would have something similar to Kinect in them).

Well said. 4K will happen over the course of the next console life cycle. Might as well be prepared to support it. Toshiba has a 4K consumer TV ready for production. Sony will follow suit shortly.

Sony has had a lot of success launching a new disc format with each console. First CDs, then DVDs, then Blu-ray, winning the format war thanks to the PS3. 1TB HVDs are on the cusp of coming to market, and they can be read by traditional Blu-ray lasers, so the readers won't be expensive to build. Sony is a key investor in the technology. It's coming, whether you want it or not.


jeff_rigby said:
Look at the current limitation of the blu-ray player in the PS3:

It's slow (too slow at 2x for 4K) and the discs are too small for a full 4K-resolution movie. What would holographic storage give us? A much FASTER drive with more storage. I take it you wouldn't object to faster load times for games. That would come with 4K support for media.

http://www.crunchgear.com/2009/09/30/holographic-storage-rears-its-head-again-blu-ray-compatible-500gb-discs/

a 120MB/sec transfer rate = as fast as a hard disk

500 Gigabyte size

readable on a slightly modified blu-ray drive

We should bear in mind that GE is suggesting that consumer drives using its technology wouldn't appear until 2014 or 2015, though, suggesting that (read/write) drive cost will be a problem in the early years.

Reading is via a single Blu-ray laser, and a cheap, slightly modified Blu-ray player can read the new discs, but it takes two lasers to write, so R/W drives will be expensive.
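For scale, a quick back-of-the-envelope comparison against the PS3's 2x drive, taking the standard 1x Blu-ray rate of 36 Mbit/s (about 4.5 MB/s):

[code]
% 1x Blu-ray is 36 Mbit/s, roughly 4.5 MB/s, so a 2x drive reads about:
\[
2 \times 4.5\ \text{MB/s} = 9\ \text{MB/s},
\qquad
\frac{120\ \text{MB/s}}{9\ \text{MB/s}} \approx 13\times
\]
% i.e. the quoted holographic transfer rate would be on the order of 13 times
% the PS3's optical drive, which is where the "as fast as a hard disk"
% comparison comes from.
[/code]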


We can view something less than 4K but greater than the current 1080p on screens of 50 inches or so. We don't have to have a 4K display, but depending on the size of the screen, some resolution less than 4K should be viewable. I.e. 4K is the upper end and as such should be the standard for the source.

Something similar happens with Blu-ray; it's a 1080p source because some, not all, TVs can benefit from the extra resolution.

The objections about 4K are not thought out. The same applies to rendering games. Some games on the PS3 can render at 1080p, and limiting the resolution output from the PS3 because some can't does not make sense. A game on a 4K PS4 does not have to render at 4K; most will render at 2K or some slightly higher resolution that might match what will be supported on most next-generation TVs.

I'll close with this: why is there an HDMI 1.4 4K standard if it's not going to be used? During the 10-year life of a PS4, 4K will become a standard; all new TVs will accept video from a 4K source and display it at the resolution supported by the TV. In some cases that might be 1080p.

EucadianProphecy said:
In my honest opinion, Nintendo definitely shouldn't be in too much of a rush to put out a new console in 2012, or even maybe 2013 (heck I think they should all wait till 2014/15). At the moment Nintendo in particular are in a very precarious position and I don't believe that they themselves will want to do any thing rash.

You have a company that to all intents and purposes abandoned the hardcore & core gamer demographic to firmly establish for themselves an installed base primarily made up of the non-gamer and non-traditional casual gamer demographics. Even with their own first party support they, for the most part, alienated the core gamer and thus resulted in third party support drying up for their console.

How then does a company like that persuade their current non-core audience to upgrade to a WiiHD? They could target the hardcore with a new console, alienating their current installed base, which would leave them two years (if they launched in 2012) of clear sailing until their competitors released their invariably far more powerful consoles in 2014. But then they have the problem of trying to persuade a demographic (the hardcore), which they alienated this gen with the Wii, to come back over from MS and Sony, who've not only done a great job in servicing the needs and wants of the core gamer with games they want to play, but have also provided them with solid, robust online infrastructures that those gamers have invested a tonne in. Nintendo will have a very, very difficult job trying to draw them away, especially given that this, being the most informed demographic, will generally know that Sony and MS' next boxes will be releasing soon.

Then Nintendo also has the problem of figuring out a way of getting the more non-traditional casual gaming audience, which they currently have captured with the Wii, to upgrade to the next console. These are a demographic who were swayed to the Wii by motion controls and Nintendo IPs, who likely care not about HD graphics in their games, and even the ones that do would have been offered a solid upgrade path to Kinect or PS Move. How does Nintendo get them all to go out en masse and buy a Wii2 without some new gimmick or technology?
What else is there beyond motion gaming, full body tracking, sense and emotion sensing, HD, and crap like smell-o-vision that Ninty can use as the core marketing premise of their next-gen console?

Virtual Reality. It's the only route that hasn't been explored well yet.

Nintendo has been interested in virtual reality since the Virtual Boy. And they tend to sink billions of R&D into making their desires into reality.
 
Argyle said:

Edit: Sony is going to use the Cairo SVG upper-level graphics library rather than low-level OpenGL or PSGL. They can custom-support Cairo with PSGL on the PS3 and it will work exactly the same on the NGP or any other platform.

Vector graphics can be used instead of "artwork". The XMB can be supported without requiring video of the screen to be transferred. HTML5 can be used as the UI in subsections of the XMB. All of these can be supported with very small overhead.

25 years ago we used to do this on an Atari ST using GEM. The idea behind the Internet and HTML/XML is the same.

http://scalibq.wordpress.com/2010/05/15/sony%E2%80%99s-playstation-3%E2%80%99s-main-graphics-api-is-not-opengl/

The problem is, the PlayStation 3 doesn’t USE OpenGL, it uses Sony’s own PSGL (which might be based on OpenGL ES 1.0, but has a LOT of hardware-specific extensions, since OpenGL ES 1.0 doesn’t even offer programmable shaders or anything… Ironically enough PSGL is based on Cg, which was developed by nVidia and Microsoft, and is closely related to Direct3D’s HLSL, but NOT OpenGL’s GLSL). And in fact, many games don’t even use PSGL all that much, but go down to the bare metal (the advantage of a console: hardware is fixed).

Yes, there are OpenGL wrappers available for the PlayStation 3, but their performance is considerably worse than PSGL, because OpenGL isn’t suited to the PS3’s hardware very well
 
The Abominable Snowman said:
Maybe the PS3.5 will just be a PS3 with all the removed features back in (Million USB ports, Blu Ray Burner, Linux, dual HDMI and whatnot)

Yeah, and the EE/GS chip. And USB 3.0 or Light Peak. And a much smaller, fanless design.
 

StuBurns

Banned
They said they were going to include a bluray burner once? I can't even imagine what the application for that would have been.
 
StuBurns said:
They said they were going to include a bluray burner once? I can't even imagine what the application for that would have been.

DVR?

BD recorders are very popular in Japan to record OTA HD.
 

StevieP

Banned
AdventureRacing said:
I'm not expecting this; if anything it feels like it is being phased out. As someone who loves local multiplayer/co-op and doesn't like online, I hate this direction.

Get a Wii.

Maybe the PS3.5 will just be a PS3 with all the removed features back in (Million USB ports, PS2, Blu Ray Burner, Linux, dual HDMI and whatnot)

Judging by Sony's recent track record, I would highly suggest you not expect any of it back.
 

Argyle

Member
jeff_rigby said:
Another assumption was that OpenGL would be a VERY important part of the PS3 and Sony's ecosystem, which would make it worth the effort to make it efficient. If this is not the case, then I see problems with the NGP interfacing with the PS3...big kludge.

OpenGL is needed for WebGL. Vector graphics can be used instead of "artwork". The XMB can be supported without requiring video of the screen being transfered. HTML5 can be used as UI in subsections of XMB. All these can be supported with very small overhead.

If the PS3 Webkit implementation did not support WebGL...I am not a web developer, but I have a feeling that the reaction would be, as we say on NeoGAF, "...and not a single fuck was given." Does anyone really use WebGL extensively yet? Even so, why do you need to have a full OpenGL implementation underneath it to use it? Just reimplement the code that handles WebGL (that presumably calls into normal OpenGL) as low-level graphics calls instead; sure, it's more work, but it's still probably less work than getting all of OpenGL to work (and will probably run faster, too).

I have to admit that I am unclear what the motivation for moving XMB into HTML5 would be. Keep in mind that XMB must render very quickly (probably no more than a millisecond or two) to keep from bogging things down too much when it is turned on, as it is rendered after the game signals that it is done rendering...not sure why you would want to move that to HTML5 just so that you can have an efficient remote play that only works if nothing is being played.

jeff_rigby said:
25 years ago we used to do this on an Atari ST using GEM. The idea behind the Internet and HTML/XML is the same.

Your web browser doesn't have to maintain an interactive framerate, though. I imagine it wouldn't be so pretty if the server had to resend the page 30 times a second.

Your idea might be workable if you can compress the data enough but I dunno...I'm skeptical since I'm not sure that I've seen anything working like that over a typical consumer internet connection.

jeff_rigby said:

Pretty much. I think PSGL is there to make porting easier, but like I said earlier, I think you'll find that games using it are definitely in the minority these days.

Funny that he would complain about the lack of GLSL support...I don't know if any commercial game programmers use GLSL. DirectX and HLSL have pretty much won in the game development community (and Cg is basically the same thing as HLSL).
 
There was the exact same rumor with the 360 a few years back. It was speculated they would shrink the CPU/GPU, add RAM, increase resolution and IQ in games, blah blah blah; it turned out to be total bullshit. No way will they tweak the hardware; it would be pointless and it's not going to happen.

As for "next gen", I have a theory of what's going to happen. I think Nintendo is going to announce a Wii 2 or Wii HD, and since they aren't interested in losing money on hardware, I expect it to be... let's put it like this, it would be to the 360 what the Wii was to the Gamecube... two 360's ducttaped together, essentially. What would be interesting would be that Nintendo would in fact have the most powerful console on the market, and have the best versions of 3rd party HD games going forward. Personally I think that would be pretty bad-ass and I'd be on board.

Again, just a stupid theory on my part. What I'd really like to see is Nintendo catch MS and Sony with their pants down and release a loss-leading, fully powered amazing true Next gen box that blows the two out of the water, but that is less likely.
 
Argyle said:
If the PS3 Webkit implementation did not support WebGL...I am not a web developer, but I have a feeling that the reaction would be, as we say on NeoGAF "...and not a single fuck was given." Does anyone really use WebGL extensively yet? Even so, why do you need to have a full OpenGL implementation underneath it to use it? Just reimplement the code that handles WebGL (that presumably calls into normal OpenGL) as low level graphics calls instead, sure it's more work, but it's still probably less work than getting all of OpenGL to work (and will probably run faster, too).

We were told that we would have the ability to view 3-D from inside a browser. This requires HTML5 and WebGL. WebGL supports games.

From reading, which can get me in trouble again, PSGL = OpenGL for everything except shader functions; this is due to shader features being supported by the SPEs, not the RSX. This may mean that it's a non-issue for most WebGL features.

I read a PowerPoint about this issued by Sony in 2005, and the reasoning behind Sony choosing to implement a custom PSGL and Nvidia Cg-based shader library was that OpenGL ES 2.0 (shader support) was not implemented yet. Edit: another reason might be more flexibility and the ability to use low-level shader code to support things like MLAA and deferred shader rendering. Shader code for the PS3 = SPE pipeline; just felt I might be misunderstood.

I have to admit that I am unclear what the motivation for moving XMB into HTML5 would be. Keep in mind that XMB must render very quickly (probably no more than a millisecond or two) to keep from bogging things down too much when it is turned on,

Not moving the XMB to HTML5, but enabling an HTML5 UI for subsections of the XMB. I would think that the move to the Jive forum, which is known to support Java on the server and JavaScript on client platforms, would support cloud applications and customizable features in the XMB.

Admittedly remote game play does have the issues with assets you mention. Remote control/desktop does not have issues that I can see.

One of the rumored features of some games was that purchasing a PS3 game includes the NGP game. With the same assets on both platforms, your issues would be satisfied.

Your idea might be workable if you can compress the data enough but I dunno...I'm skeptical since I'm not sure that I've seen anything working like that over a typical consumer internet connection.
You have seen it, it's called PC remote desktop.

http://tablets-planet.com/2011/02/28/new-apple-ipad-application-allows-remote-desktop-control-for-pc-or-mac/

It's available for the iPad (1/4 the performance of an NGP) to connect to PC and Mac desktops. As you can see, the XMB is much simpler than a PC desktop. Games are another issue, but your description of the OpenGL process should work with games too.

[Images: Apple-iPad-GoToMyPC-demo.jpg, Apple-iPad-GoToMyPC-screenshot.jpg]


And there is also Go To My PC which allows remote desktop in a web browser. https://www.gotomypc.com/tr/gcon/2010_Q4/Demo_Contextual/160x600/g25_petofficelp?Target=mm/g25_petofficelp.tmpl

Pretty much. I think PSGL is there to make porting easier, but like I said earlier, I think you'll find that games using it are definitely in the minority these days.

The statement that PS Suite would be used to support porting applications to the PS3, which was quickly retracted, was thought to be a typical Sony "don't tell them anything till it's released" retraction, but it might instead indicate too many issues porting to the PS3 because OpenGL is not supported. Edit: Cairo makes it a non-issue. The first reason is probably correct.
 

Argyle

Member
jeff_rigby said:
We were told that we would have the ability to view 3-D from inside a browser. This requires HTML5 and WebGL. WebGL supports games.

From reading, which can get me in trouble again, PSGL = OpenGL for everything except shader functions; this is due to shader features being supported by the SPEs, not the RSX. This may mean that it's a non-issue for most WebGL features.

Where were we told that we would be able to view 3D in the web browser? I must have missed that.

Also, I do not think that reason is correct, PSGL pretty much just runs on the RSX. I do not believe it uses the SPEs at all.

jeff_rigby said:
Not moving the XMB to HTML5, but enabling an HTML5 UI for subsections of the XMB. I would think that the move to the Jive forum, which is known to support Java on the server and JavaScript on client platforms, would support cloud applications and customizable features in the XMB.

I must be dense because I guess I am not seeing the benefit here...oh well, agree to disagree I guess :) Seems overkill to me.

jeff_rigby said:
Admittedly remote game play does have the issues with assets you mention. Remote control/desktop does not have issues that I can see.

One of the rumored features of some games was that purchasing a PS3 game includes the NGP game. With the same assets on both platforms, your issues would be satisfied.

You have seen it, it's called PC remote desktop.

http://tablets-planet.com/2011/02/28/new-apple-ipad-application-allows-remote-desktop-control-for-pc-or-mac/

It's available for the iPad (1/4 the performance of an NGP) to connect to PC and Mac desktops. As you can see, the XMB is much simpler than a PC desktop. Games are another issue, but your description of the OpenGL process should work with games too.

[Images: Apple-iPad-GoToMyPC-demo.jpg, Apple-iPad-GoToMyPC-screenshot.jpg]


And there is also Go To My PC which allows remote desktop in a web browser. https://www.gotomypc.com/tr/gcon/2010_Q4/Demo_Contextual/160x600/g25_petofficelp?Target=mm/g25_petofficelp.tmpl

This is not the same thing as what we were describing, which I understood to be sending the raw draw calls over the network for the client to render with minimal loss. Have you used Remote Desktop? It seems to use simple dirty rectangles with some bitmap compression from what I can tell. Have you ever tried to watch a YouTube video, much less run a game in it? I can guarantee you will not be happy with the performance, if the game will even run under that environment.

Like I said, I haven't seen a working implementation, doesn't mean it's impossible, but I'm sure smarter folks than myself have considered it and I suspect there is a reason why no one uses it, most likely because this would use more bandwidth than you think.

Also, I took "NGP version to be included" to mean...well, that an NGP version would be included, and that it was ported to NGP and would basically be the same game, on the NGP. IMHO as long as the PS3 version wasn't using the SPEs heavily for things that influence gameplay (say gameplay physics) the NGP might be able to keep up running the same game.

jeff_rigby said:
The statement that PS Suite would be used to support porting applications to the PS3, which was quickly retracted, was thought to be a typical Sony "don't tell them anything till it's released" retraction, but it might instead indicate too many issues porting to the PS3 because OpenGL is not supported.

I hope my guess is correct and Sony did finally implement OpenGL ES 2.0. The reasoning in the PowerPoint indicated 2.0 was not a finished standard, so Sony went with a custom PSGL that equaled OpenGL ES 1.0 plus the Nvidia Cg shader library, which was complete at the time. The 2.0 standard is now finished, and with cross-platform support being a big attraction, and the WebKit browser, I think there is enough need to support the effort.

The webkit javascript engine in the PS3 is a JIT. I was told that was impossible on the PS3 by some and doable with much effort by Sony from others. Sony did it and Netflix did it also with a custom QT Webkit port to the PS3. We may be overestimating the difficulty or underestimating the forces driving this development.

Without knowing anything about PS Suite (as far as what it contains)...I was under the impression that it was basically an environment you would write your game for that would run under Android as well as NGP (maybe virtualized on NGP?) and maybe PS3 (you say it was announced and then unannounced?)...

PS3 seems like it would be more challenging, I suspect PS Suite games can run native code, which would be ARM code for most Android devices/NGP, right? So you'd have to have an ARM interpreter/emulator that runs well enough on PS3 to be equivalent to a decent phone, in addition to everything else... My guess is that the announcement/unannouncement is that they plan to get it going on PS3, but there's significant technical risk and so they don't want to promise it in case they fail to deliver. But that's all speculation, so who knows...
 
Argyle said:
Where were we told that we would be able to view 3D in the web browser? I must have missed that.
Yes a Sony employee stated we would be able to view 3-D inside a browser within a year (a year ago). No mention of a new browser in the statement.

Also, I do not think that reason is correct, PSGL pretty much just runs on the RSX. I do not believe it uses the SPEs at all.
The RSX does have shader ability (you are probably correct), but later-generation engines offload shading and perform deferred shading and MLAA on the SPEs.

RE: HTML UI in subsections of the XMB, I must be dense because I guess I am not seeing the benefit here...oh well, agree to disagree I guess :) Seems overkill to me.
WebKit will be in the NGP, the PS3 and all coming Sony Android handhelds. The choice to use HTML5 for SOME user interfaces and cloud computing, as in a better email editor or widgets, seems obvious to me.

This is not the same thing as what we were describing, which I understood to be sending the raw draw calls over the network for the client to render with minimal loss. Have you used Remote Desktop? It seems to use simple dirty rectangles with some bitmap compression from what I can tell. Have you ever tried to watch a YouTube video, much less run a game in it? I can guarantee you will not be happy with the performance, if the game will even run under that environment.
Sorry, my fault; bringing OpenGL into the discussion was misleading. And it's probably both what we were describing and a terminal-server model which runs at a higher level. As long as similar resources are at each end (PC remote server and PC remote client), it's not video; it's similar to HTML in that, for the most part, descriptions of objects on screen are sent, not bitmaps of objects. Perhaps we should not start at OpenGL but understand that a version of Cairo (the PS3 WebKit port) requires OpenGL. The primitives would be from the Cairo library.

I did mention "25 years ago we used to do this on a Atari ST using GEM. The idea behind the Internet and HTML/XML is the same." GEM is a library of graphics primitives that was in all Atari ST computers (Motorola 68000 with 256K of memory). Send higher level GEM and vector graphics can be sent with a few bytes instead of 10K for a bitmap.

PC remote desktop can run on dialup connections. Video in a window does require a faster network, and a game will suffer some, but it is still possibly doable.

"WebGL games" is also misleading. The browser engine does have access to Cairo too. Having OpenGL support generally means accelerated graphics drawing, not that information is sent as OpenGL low-level calls.

Edit: Sony and Google may be doing some groundbreaking and coming up with new standards. In researching Android shaders, it appears as if Google, with "GAME" in mind for Android 2.3 (this is in the Android 2.3 intro), is developing programmable low-level shader support similar to the PC, PS3 and NGP.


Without knowing anything about PS Suite (as far as what it contains)...I was under the impression that it was basically an environment you would write your game for that would run under Android as well as NGP (maybe virtualized on NGP?) and maybe PS3 (you say it was announced and then unannounced?)...

PS3 seems like it would be more challenging, I suspect PS Suite games can run native code, which would be ARM code for most Android devices/NGP, right? So you'd have to have an ARM interpreter/emulator that runs well enough on PS3 to be equivalent to a decent phone,

It is my understanding that it will be native code on all platforms, not interpreted. Again, because Sony started with Android, we assume interpreted, running on an engine.

To EASILY PORT between the NGP and Android 2.3 hardware platforms using the "C" NDK released by Google (I'm assuming that's how Sony will work PS Suite), the OS libraries have to have similar calls. There is no reason for Sony not to use the open source libraries that Android 2.3 uses, making the process of porting NGP PS Suite "C" code to Android trivial. Many of those same libraries, and most likely WebKit, will be ported to the PS3 also.


http://a2tech.blogspot.com/2011/02/how-html5-is-aiding-in-cross-platform.html

How HTML5 is aiding in cross platform development.

The article talks about how powerful HTML5 is and that applications can be written in HTML5 and are cross platform. It goes on to talk about a product that puts HTML5 inside a wrapper that looks like a native language application and can be sold in the platform store.

Beyond the obvious above, which I had been speculating about for Sony before the PS Suite announcement (a wrapper for HTML5 and UltraViolet DRM), is that for HTML5 WebKit support on all these platforms, the OSes on these platforms must have similar open source libraries to support WebKit.

So if a platform has Webkit installed it must have the libraries and "C" becomes portable between platforms. This was the extension to the logic I was not able to see until the PS Suite announcement; this is what I believe Sony is doing.

"C" is a language that has been designed to be portable between platforms. This portability requires similar libraries or "C" calls. With Webkit in all these platforms there is a higher likely hood for portability.

This is why iOS and Android as well as Nokia with QT webkit, or any platform relying on Webkit is a potential easy PS Suite target.

There are other basic OS building blocks that platforms have in common: POSIX, NeXTSTEP, OpenGL, etc. form the building blocks, and WebKit defines several standard implementations which are supported with similar OS libraries. Android relying entirely on open source libraries (free) might mean that Sony will use those libraries for the NGP and PS3.

I don't think it's a coincidence that the Android 2.3 hardware features are almost exactly duplicated in the NGP. This makes it easier to develop cross-platform games and applications between the NGP and Android.

If you look at what is required for a modern browser, the PS3 requires OS updates as well as browser updates. Sony was probably waiting to update the PS3 OS until the NGP / PS Suite open source OS library choices were finalized. That required knowing what would be in Android 2.3. I.e. Sony had to wait till now to update the PS3.

This is all guesswork on my part.

http://www.dcemu.co.uk/vbulletin/threads/352333-SCEA-s-Jack-Tretton-talks-Sony-NGP-announces-PlayStation-Suite-for-PS3

We managed to have a sit-down with the SCEA President-CEO following the company's big Tokyo meeting, a non-working NGP unit in tow. For a brief moment, Tretton "confirmed" that PlayStation Suite games (currently slated for Android devices) would work on PlayStation 3. We asked rather directly, to which he responded, "Yes, they will. Yeah." By the next question, however, he explained that he might've misspoken and wanted to clarify that Suite is only NGP right now.

http://www.psxextreme.com/ps3-news/8500.html

In a roundtable session after last week's PlayStation Meeting, SCE CEO Kaz Hirai said the Suite won't end with Android devices. As reported by Andriasang, Hirai said they "have a completely open stance. With carriers and with hand set makers." The bottom line is that Sony wants to bring their PlayStation brand to as many different people as possible. Added Hirai:

"There are a variety of OSes. But we're focusing first on Android. There's also Windows, iOS and so forth, but we don't have the resources to make it compatible with everything from the start."

As far as Android units go, Sony will target the smartphones first, and up next are the tablets. At this point, it all depends on "user adoption," and they aren't ruling out PSS on platforms like Google TV. Sony will have a review process for Suite software and thanks to this QA, "Sony hopes to establish an ecosystem that runs on Sony certified hardware, differentiating PlayStation Suite from the anything-goes PC world." Yeah, PlayStation is gonna be everywhere.

Opera (Sony TV) and Webkit (LG and Samsung) are in 2011 Broadcom SOC supported TVs. These might be supported by PS Suite.
 
The problem is, the PlayStation 3 doesn’t USE OpenGL, it uses Sony’s own PSGL (which might be based on OpenGL ES 1.0, but has a LOT of hardware-specific extensions, since OpenGL ES 1.0 doesn’t even offer programmable shaders or anything… Ironically enough PSGL is based on Cg, which was developed by nVidia and Microsoft, and is closely related to Direct3D’s HLSL, but NOT OpenGL’s GLSL). And in fact, many games don’t even use PSGL all that much, but go down to the bare metal (the advantage of a console: hardware is fixed).

Yes, there are OpenGL wrappers available for the PlayStation 3, but their performance is considerably worse than PSGL, because OpenGL isn’t suited to the PS3’s hardware very well (just like how you can run linux on it, but the Cell is very much underused by linux applications, and you end up with a very slow system, with the exception of custom PS3 applications).

http://www.engadget.com/photos/sony-ngp-developer-presentation-at-gdc-2011/#3940574

[Image: 20110302-16595675-ngp-img4831.jpg]


So the NGP is going to use the same PSGL and shader model as the PS3. That makes it compatible between PS3 and NGP, but less compatible with OpenGL ES 2.0 and Android.
EDIT: http://android-developers.blogspot.com/2011/02/introducing-renderscript.html

Android is evolving to use low-level programmable shaders to support more powerful GPUs and games. So Android is providing the tools for Sony to support PSGL and its shader model with PS Suite.

And most likely Argyle was correct and Sony will use their PSGL and shader library with WebKit (Cairo and other libraries would include support for PSGL) rather than OpenGL ES 2.0. Chrome on Android 2.3 and 3.0 might be going the same route to support games. This would make it easier to support cross-platform games and WebGL games on PC, PS3, NGP and Android 2.3/3.0.

TVs support only 2D acceleration and, I believe, no advanced low-level programmable shader functions, so shaders run on the CPU, not the GPU (slow).

[Image: 20110302-16595675-ngp-img4836.jpg]


I read this as: the libraries are similar, so "C" is portable. Lots of code will be shared; open source libraries will be shared.

My guess is turning into supported speculation.
 
RyanDG said:
Fuck it.. Here we go.

Under the section:
●当面はプランがないPlayStation 4(PS4) ("No plans for the PlayStation 4 (PS4) for the time being")

The very first part of the paragraph simply poses the question - with the announcement and the release of the NGP, should we be suspecting the successor of the PS3 any time soon? It then goes on to quote Kaz who basically reiterates the 10 year console plan, that the PS3 remains attractive to developers, and that Sony would like to continue this support by continuing to push new features out for the PS3. Further, they would like to use the PS3 and NGP as a sort of partnership for potential releases (ie, basically seeing the NGP as a way to further extend the PS3s life like the PSP did for the PS2 - this is me reading a bit into this, but that's what I'm getting out of this comment).

The second paragraph in this 'header' deals with the same stuff we've heard before, rising development costs, finding ways to support the industry, and how the market may not quite be ready for a next generation of consoles quite yet, because of the current market situations.

The next section is dealing with how Sony may be reworking some of the rumored technology coming with the PS4. It actually pretty much shows that Sony is still planning the PS4 and development hasn't been suspended or halted. Nothing here that we haven't seen before.

If someone wants to correct my gist of a translation, go right ahead. Like I said before, I'm a little rusty.

Yes, I think you are closer to the original intent. The Google translation is terrible, and the author of the article puts his own interpretation into the piece as well. Reading all of it, plus the Sony clarification issued because the article was misinterpreted by others, gives something closer to your translation than the original author's Google translation.

"to use the PS3 and NGP as a sort of partnership for potential releases" is supported in the latest NGP GDC developer lecture which I cite above. I suspect this will also include remote control and play between NGP and PS3 exceeding what the original PSP supported.

My read was both MS and Sony were waiting for cost effective GPUs to be developed before proceeding on a new generation design.

For Sony, a next generation design might also be tied to Next generation CE displays like a new 4K TV standard being available in Japan in 2015. On several fronts Sony has invested in projects that might be used in 4K CE equipment like Holographic blu-ray:

120 MB/sec transfer rates (as fast as a hard disk and MUCH faster than current Blu-ray)
500 GB storage size (the large size needed for 4K movies)
can be read by a slightly modified Blu-ray player (cheap)

Projected consumer release by the front runner in this (GE) is 2014.

So loosely it looks like 2014 would be a target date for a PS4 release.


If Sony has a release date of 2014 - 2015 for the PS4, then a 14nm die process might be used. With a cost-effective GPU and 1080p as the game rendering target resolution (still supporting 4K for media), a fanless design is also possible. If home network servers become more popular (hard disk and holographic drive in the server), I can see two or three PS4 console designs. A rumored PS3.5 at 20nm might also see some of the following.

1) Combine home network server and game console in one case. Playable on any TV in the home via home network. TVs have camera as standard equipment. This also includes tablets and might be playable outside the home also.

2) Network server provides the drives and the PS4 is part of a new Sony 4K TV. Is a module that plugs into the Sony TV or is a stand alone box without drives.

3) Standard Game console which includes drives.

If Sony decides to support games with some resolution above 1080p, the resolution choice will severely impact the design: a fan might be required, and power/heat issues might eliminate some of the above possibilities.

This decision will probably be impacted by what MS and Nintendo do with a next generation Xbox and Wii. Next generation GPU performance and cost will probably decide how all will proceed and we don't have information on this yet so designs are on hold.
 
New GDC video showing what Unreal engine could do on a next generation platform.

http://www.youtube.com/watch?v=n3XeCHywNYM&feature=player_embedded

The above is Epic Games’ estimation of what Sony, Microsoft and Nintendo’s next game boxes shall be capable of.

What you see above is an evolution of Rasterization i.e fake 3D.

Regular readers of SegaLeaks are aware that Sega’s next console ditches Rasterization in favor of Real Time Ray Tracing.

The above video may have left forum goers across the internet in awe, however that is only so because those same people have never witnessed full speed fully interactive Ray Tracing.

You know the term "it's all smoke and mirrors". Well, this demo is all smoke and mirrors: lighting, reflections, smoke, rain, textures, etc. A really good demo of realism.

Ray tracing takes more processing power. The demo, or at least the video, was 720p max. It would be interesting to see the specs on the hardware platform that generated the graphics. That might give us an idea of next-generation GPU needs.
 

jsnepo

Member
This thread somehow reminds me of that other thread focusing on the powah of the Cell made by this Mike guy.
 
Here are my hopes for the PS4...

Stephen Colbert said:
I feel like this upcoming console generation may well be the console generation, or if not, at least the last generation of consoles that we see until past 2020 (when I'll be too old to game).

So I want them to go all out on the technology front. The current gen is going to last close to a decade. This coming gen has to be built to last yet another decade.

[Image: sony4_playstation.jpg]


This is why I expect and hope that the PS4 will feature...

The GPU better kick massive amounts of ass while also being easy to program. Something like the NVIDIA Maxwell (The combined CPU+GPU that Nvidia is developing with ARM for 2013 that Nvidia claims will be 10X more powerful than their current high end GPUs) should do nicely.

1TB HVD playback (shouldn't be expensive since HVD players are just modified Blu-ray players). The PS1 launched CDs, the PS2 launched DVDs, the PS3 launched Blu-rays; it only seems appropriate that the PS4 launch HVDs.

Output up to 4K resolutions (games will still be in 1080p, but the PS4 should play back 4K (film resolution) movies off HVD discs). The ppi of a 42 inch 1080p TV is only 62 (the iPhone 4's ppi is 331). The world will be demanding higher-ppi screens soon enough. A 4K 42 inch TV (ppi of ~140) should suffice for the next decade.

USB 3.0 (since so many devices use USB)

Thunderbolt ports (since the technology is so cool and has so much potential)

And don't skimp on the RAM either. I'm hopeful that next-gen consoles will pack a bare minimum of either 4GB of normal XDR2 RAM, or 2GB of XDR2 RAM upgraded with the technology from the Terabyte Bandwidth Initiative (ideal), or if not, then at least 4-6GB of GDDR5 RAM (still light-years ahead of DDR, though not quite as good as XDR2 RAM), a large amount of bandwidth on the memory bus, and a smart API that can split computations between the GPU and CPU.

Likewise, pairing it with a small SSD that houses the actual system OS and can be used to cache game assets would dramatically reduce boot times and even cut down on power consumption. Considering the $229 iPod Touch houses a 32GB SSD, getting something like that into the PS4 in 2012 shouldn't be too difficult. SSDs are tiny. So running the OS, saving games and streaming game assets off of that, while having a bay that can be used for a regular hard drive to store movies and music, would be ideal. It would also mean that you don't lose all your game saves every time you upgrade your hard drive.


The earliest that Sony can deliver such a device at a price point of $399 for the cheap SKU and $499 for the premium SKU is when the next gen should start.

Nintendo, on the other hand, I think can and will deliver on virtual reality in the upcoming gen. They already did 3D and they already did motion control; it's time to go that next step and deliver on what they promised way back in the 90s with the Virtual Boy. What better way to gain more recruits from the blue ocean?

Nintendo has been sinking billions upon billions into R&D for the past several years on something. They have clearly wanted to make this happen since the Virtual Boy. And knowing Nintendo, once they get an idea into their head, they'll keep coming back to it and trying it until they can make it happen.

As far as MS goes, I really couldn't care less what they do. They ruined Rare, closed Ensemble, lost Bungie, and really don't have ANY first party studios or first party franchises left that I care about.

So I hope we won't see the next gen start for at least two years.
 

mrklaw

MrArseFace
Arpharmd B said:
There was the exact same rumor with the 360 a few years back. It was speculated they would shrink the CPU/GPU, add RAM, increase resolution and IQ in games, blah blah blah; it turned out to be total bullshit. No way will they tweak the hardware; it would be pointless and it's not going to happen.

As for "next gen", I have a theory of what's going to happen. I think Nintendo is going to announce a Wii 2 or Wii HD, and since they aren't interested in losing money on hardware, I expect it to be... let's put it like this, it would be to the 360 what the Wii was to the Gamecube... two 360's ducttaped together, essentially. What would be interesting would be that Nintendo would in fact have the most powerful console on the market, and have the best versions of 3rd party HD games going forward. Personally I think that would be pretty bad-ass and I'd be on board.

Again, just a stupid theory on my part. What I'd really like to see is Nintendo catch MS and Sony with their pants down and release a loss-leading, fully powered amazing true Next gen box that blows the two out of the water, but that is less likely.
I'm trying to work out what part of that would catch MS/Sony with their pants down?

'here is our amazing new home console, it's brand new and plays the same kinds of games you've been playing on PS3 and 360 for 5 years, only a bit better'

Not the most compelling sales pitch ever.
 
The upcoming gen has to last well past 2020. 4K projectors and 4K HDTVs are already going into consumer production this year; they'll be fairly common in a few years. They offer a ppi of 140, whereas current 1080p TVs have a ppi of around 60. The iPhone 4 by comparison has a ppi of 331.

More importantly, 4K is the resolution used in film. It's what Avatar and other blockbusters are filmed in. They could be transferred directly, without loss of color accuracy, onto HVD discs to play on 4K TVs and 4K projectors. Hell, those discs could even be used by movie theaters instead of the massive film reels they currently use.

HVD isn't really that big an investment for Sony. It uses a modified Blu-ray drive and the same laser. Offering HVD playback would help Sony sell 4K TVs, and they could license out the tech to others.

Digital distribution at 4K resolution is not feasible and will not be feasible over the next decade. In fact, Sony pushing 4K is their best hope of keeping digital distribution from getting too popular. Videophiles and enthusiasts will prefer to view a movie at the original 4K resolution of its theatrical release over watching Netflix at 720p.

I still want games to run at 1080p next gen. They can continue to be released on Blu-rays. But there's no reason why the PS4 can't also play back 4K movies on HVDs.

In fact, if anything, it helps the PS4. The PS3 having Blu-ray playback inevitably led to a lot of technophiles and early adopters picking it up. It also led to Best Buy and others pushing it and bundling it with 1080p TVs.

HVD/4K playback will push early adopters to pick up the PS4 again, and will lead to Best Buy bundling it with 4K TVs.
 
Stephen Colbert said:
Here are my hopes for the PS4...

So I hope we won't see the next gen start for at least two years.
Are you insane? Some of that seems possible if you're happy to pay 599 or far more for a console again - releasing next year! - but that VR Wii HD thing is a joke, right?
Although I could be wrong, I read somewhere that the steps to even reach VR would be first delivering good 3D - which you could argue hasn't been achieved yet - then developing holographic technology, and only after that could VR be viable. And when you consider 3D hasn't become mainstream yet...
 

dogmaan

Girl got arse pubes.
jsnepo said:
This thread somehow reminds me of that other thread focusing on the powah of the Cell made by this Mike guy.

i was thinking the exact same thing..... how odd.
 

DECK'ARD

The Amiga Brotherhood
4K is ridiculous, and sounds like the next Super Audio CD.

HD adoption was helped by the switchover to digital broadcasting, and HD is more than enough for the average consumer. Convenience will trump quality, as happened with music. On-demand and streaming will win out.

Also, if Sony have learnt anything from this gen, it's that the arms-race approach is not the way to go. The move to HD was arguably too soon and hurt the industry; trying to kick-start a 4K bandwagon, which would mean having content to show it off in games as well as film, would be disastrous.
 

DonMigs85

Member
If we do move to 4K sooner rather than later, I do hope the standard lasts for a good 50 years before the next upgrade. We got along fine with NTSC 480i for so long after all.
Ah but then, they'll likely start shooting movies in 8K or higher soon...
 
Solid_Rain said:
Are you insane? ...

All tests so far have been inconclusive.

Solid_Rain said:
that VR Wii HD thing is a joke right.
Although I could be wrong I read somewhere the steps to even reach VR would be to first deliver good 3D - which you could argue hasn't been achieved yet, then developing holographic technology and only after that could VR be viable. And when you consider 3D hasn't become mainstream yet...

I'm not sure I would classify a Wii HD as Nintendo's next generation, as opposed to a stopgap intermediate step toward their next gen/their next revolution.

Regardless, I don't know why you're talking about holograms. Go to any GameWorks and you can experience VR in all its glory. You step into a booth, put on a headset, and it feels like you're on a rollercoaster. And those headsets aren't even 3D; Nintendo's version will be. The technology already exists. No one else has been willing to take that leap and bring it out of the arcades and into the home yet.
 

DECK'ARD

The Amiga Brotherhood
DonMigs85 said:
If we do move to 4K sooner rather than later, I do hope the standard lasts for a good 50 years before the next upgrade. We got along fine with NTSC 480i for so long after all.
Ah but then, they'll likely start shooting movies in 8K or higher soon...

If the motivation behind it is just to sell new TVs and preserve a physical format, it will fail. It's the same fear of shifts in the market that doomed the Walkman. Sony's fear of MP3 and downloading gave their entire market away to Apple.

Technology that makes things more convenient, cheaper and easier is a lot more appealing than technology that is there to try and control and restrict.

Also, content providers like broadcasters would rather offer more channels, programming, films-on-demand, etc. than spend huge amounts of bandwidth delivering less content in 4K. HD became an attractive upgrade because of all the HD content out there, which was helped by the digital switchover. 4K will be a much harder sell, both to consumers and broadcasters.
 

DECK'ARD

The Amiga Brotherhood
Stephen Colbert said:
I'm not sure if I would classify Wii HD as Nintendo next generation as opposed to a stopgap intermediate step to their nextgen/their next revolution.

Regardless, I don't know why you're talking about holograms. Go to any Gameworks and you can experience VR in all it's glory. You step into a booth, put on a headset and it feels like you're on a rollercoaster. And these headsets aren't even 3D. Nintendo's version will be. The technology already exists. Just no one else was willing to take that leap and bring it out of the arcades and into the home yet.

The HD part of Nintendo's next console won't be the be-all and end-all of the system; it will have an angle on it. And going by the 3DS, it will have lots of angles.

But none of those angles will be VR. You don't spend a whole generation changing the message of a company to social gaming, breaking down barriers, and making games more inclusive, only to then turn all of that on its head and force people to wear a headset that divorces them from the environment they're in.

Madness, this is like a time-warp back to that Nintendo On video.
 

DonMigs85

Member
DECK'ARD said:
If the motivation behind it is just to sell new TV's and preserve a physical format it will fail. It's the same fear of shifts in the market that doomed the Walkman. Sony's fear of mp3 and downloading gave their entire market away to Apple.

Technology that makes things more convenient, cheaper and easier is a lot more appealing than technology that is there to try and control and restrict.

Also, content providers like broadcasters would rather offer more channels, programming, films-on-demand, etc. than waste huge amounts of bandwidth on 4K for less. HD became an attractive upgrade because of all the HD content out there, which was helped by the digital switchover. 4K will be a much harder sell, both to consumers and broadcasters.
Yep, I'm fairly certain they just wanna sell more TVs. As far as 2D HDTVs go, I think the Pioneer Kuro was pretty much perfect; then they introduced 3D for the home, and now, until they can come up with decent glasses-free 3D, they have nothing new to push on the market.
 

DECK'ARD

The Amiga Brotherhood
DonMigs85 said:
Yep, I'm fairly certain they just wanna sell more TVs. As far as 2D HDTVs go I think the Pioneer Kuro was pretty much perfect, then they introduced 3D for the home and now, until they can come up with decent glasses-free 3D they have nothing new to push on the market.

Yeah, glasses-free 3D is really the only obvious next step, but it will be a long time coming and may never be ideal because of the viewing sweet spots. 3D with glasses will never go mainstream, which I'm sure all the manufacturers are aware of.

4K seems like the alternative thing for them to push, because it's obviously much easier to implement - just without a convincing need for it.

They'd be better off just making the best TVs they can, rather than thinking people now need some big new idea to make them buy one. It never used to work that way, and if they keep trying to push whole new standards year in, year out, consumers won't know what to go for and will stick with what they know works. Which will be HD.
 
4K and glasses-free 3D go hand in hand. It's far easier to do glasses-free 1080p content on a 4K TV than via the traditional route (making wider pixels): a 4K panel has four times the pixels of a 1080p one, so even split between two views each eye can still see a full 1080p picture. Toshiba already showed off such a TV. Every high-end TV a year or two from now will be 4K with glasses-free 3D. So the PS4 might as well be designed to support it if it wants to still be a relevant device a decade from now.

DECK'ARD said:
Madness, this is like a time-warp back to that Nintendo On video.

Didn't Iwata (or Miyamoto) respond to the Nintendo On video saying he loved it, and that if the technology were possible he would love to do it, or something like that? That was almost a decade ago. The tech is finally ready now.
 
A goggle system using AR would be cool if it could be done comfortably, seamlessly, and without ruining your eyes.

Just like the 3DS, you put a card down on the table or wall and that becomes your 'screen', except it's fully 3D and can expand in all dimensions into the real world. It wouldn't divorce you from your environment because you'd still be able to see through to your surroundings perfectly fine unless a specific game was designed to be fully immersive. It would be cool to put the goggles on and you've suddenly got a holographic IMAX screen in your room.
 

DECK'ARD

The Amiga Brotherhood
Stephen Colbert said:
4k and glassfree 3d go hand in hand. It's far easier to do glassfree 1080p content on a 4k tv than the traditional route. Toshiba already showed off such a tv.

Only with a resolution drop, which negates the 4K. That isn't going to help with pushing 4K content, and even 3D content is obviously very lacking. You need content to push new technologies; both 3D and 4K will struggle, whereas on-demand/streaming/download will flourish over the next few years. And just like with music, convenience and availability will win out.

Didn't Iwata (or Miyamoto) respond to the Nintendo On video saying he loved it and if the technology was possible, he would love to do it or something. That was almost a decade ago. The tech is finally ready now.

I've never read any comment from anyone at Nintendo regarding that video, let alone Iwata or Miyamoto. Miyamoto's comment about the living room becoming the game was just out-of-the-box thinking about what could happen in the future, and it was all about there being no barrier between the player and the game. A VR headset is a massive barrier; it won't happen.

It's the very opposite of what Nintendo stand for at the moment.
 
The point is that every high-end TV a year or two from now will be 4K with glasses-free 3D. So the PS4 might as well be designed to support it if it wants to still be a relevant device a decade from now.

As for Nintendo's VR system, yes, Nintendo actually did comment on the now-infamous Nintendo On video. They said something along the lines of: they loved it. And multiple glasses for multiple people (all powered by the same console) are not out of the question. It would be fun boxing your wife in a virtual-reality 3D environment. The ads, hype, and press releases write themselves.

If you think about where motion gaming and 3D gaming intersect, VR is the next stop on that road.
 

DECK'ARD

The Amiga Brotherhood
Graphics Horse said:
A goggle system using AR would be cool if they could be done comfortably, seamlessly, and without ruining your eyes.

Just like the 3DS, you put a card down on the table or wall and that becomes your 'screen', except it's fully 3D and can expand in all dimensions into the real world. It wouldn't divorce you from your environment because you'd still be able to see through to your surroundings perfectly fine unless a specific game was designed to be fully immersive. It would be cool to put the goggles on and you've suddenly got a holographic IMAX screen in your room.

That's (slightly) more Nintendo-like, but the technical problems with mapping the 3D space mean that what you'd get in reality wouldn't be worth the effort. AR stuff works because it's simple, on a small (flat) area. It's well suited to the 3DS because of that, obviously; the bigger in scope you go, the more its failings would become apparent.

Nintendo just won't go down the road where a player has to wear something to experience it, leaving out the people in the room who aren't. Everything they do now is about sharing the experience, making gaming social, encouraging interactivity and lowering the barrier of entry.

Whatever Nintendo have up their sleeve for Wii 2, it will most likely be more along the lines of 'why did no one do that before' rather than high-tech thinking like this.
 

FoxSpirit

Junior Member
Zaptruder said:
You've fully bought into the bullshit of the audio industry.

Also, you fail at reading comprehension.

First off, the example quoted can only mean the Elac Jet-III tweeter. The linearity increase in that simply means you get much better distortion figures up to the 20kHz mark. And believe me, that thing needed it; the Jet-II tweeter is brutal. The speakers with the Jet-III are a work of art, though, and if you appreciate time resolution, you'll be looking a long way up the tree until you find speakers that perform similarly.

Now, for the 4K debate: I can easily tell I am watching a movie. When I look out of my window, I see a remarkably detailed picture with superb contrast and subtlety. In a movie, when people aren't filling the screen, I can't read their faces the way I can read people's faces on the street.
The question simply is: is the extra detail in 4K worth the trade-off, given the immense technical investment every part of the chain would have to make to produce and reproduce it?
For me, that's currently a no. A really good TV already looks really good and sharp; before I buy into this 4K stuff, you'll have to demo me hard and show me some really convincing material.

Until then, make 4K displays, but don't expect me to pay any kind of price premium for them.
 

DECK'ARD

The Amiga Brotherhood
Stephen Colbert said:
The point is that every high-end TV a year or two from now will be 4K with glasses-free 3D.

No, they won't.

You will have a split market because of cost, and the sweet-spot issue of glasses-less 3D will be as much of a problem for the consumer as wearing glasses - and it may never be solved to the point where it's not an issue. Also, content will drive adoption rates, and if you don't have the 3D and 4K content, you aren't going to see massive adoption or a massive need to support it.

As for Nintendo's VR system, yes, Nintendo actually did comment on the now-infamous Nintendo On video. They said something along the lines of: they loved it. And multiple glasses for multiple people (all powered by the same console) are not out of the question. It would be fun boxing your wife in a virtual-reality 3D environment. The ads, hype, and press releases write themselves.

If you think about where motion gaming and 3d gaming intersect, VR is the next stop on that road.

Well a quick Google pulled up nothing from Nintendo about it, and I don't remember any comments by them about it.

Nintendo's motion controls were about lowering the barrier of entry, performing an NES-style reset on the industry to get people into games who were put off by their complexity and counter-intuitiveness. It wasn't about creating an alien VR world with the massive barrier of needing a headset to wear and enjoy it.

Speculate away, but I think you are barking up the wrong tree and ignoring the message Nintendo have been giving out for the last 5 years with the DS and the Wii.
 
Well, moving away from the discussion of 4K, HVDs, SSDs, XDR RAM, VR, and whatnot: I don't know about you guys, but to me the NVIDIA Maxwell sounds absolutely amazing, and I hope Sony waits for it before launching the PS4.

Maxwell is the combined CPU+GPU (an APU) that Nvidia is developing around ARM cores for 2013. Nvidia claims it will be 10x more powerful than their current high-end GPUs, so it should blow even the high-end tech demos we're seeing now out of the water. I think it would be cheap enough to stick in a console by 2014.

That would be one way to ensure the PS4 lasts us until at least 2022.

So it's counterproductive to look at things like BF3 and get impatient for the next generation of consoles. These games still don't look like a true generational leap over something like Killzone 3. Let's encourage Sony to wait for Maxwell and its competitors to show up before putting together their next console.

Maxwell will feel like a true generational leap over Killzone 3. And waiting for it will give Sony plenty of time to make some serious cash off the PS3, while also focusing their attention on making the NGP a success.

If Microsoft rushes out a console in 2012, before people feel like they're really ready to upgrade, MS will only get a very short-term advantage.

They'll be kicking themselves when Sony shows off a Maxwell-based PS4 at the subsequent E3 that is almost a generation beyond what Microsoft has to offer.
 

spwolf

Member
DECK'ARD said:
If the motivation behind it is just to sell new TV's and preserve a physical format it will fail. It's the same fear of shifts in the market that doomed the Walkman. Sony's fear of mp3 and downloading gave their entire market away to Apple.

Technology that makes things more convenient, cheaper and easier is a lot more appealing than technology that is there to try and control and restrict.

Also, content providers like broadcasters would rather offer more channels, programming, films-on-demand, etc. than waste huge amounts of bandwidth on 4K for less. HD became an attractive upgrade because of all the HD content out there, which was helped by the digital switchover. 4K will be a much harder sell, both to consumers and broadcasters.

Uhm, back in 2006, I bet you posted the same thing about HD :).
Many here sure did.

Within 3 years, the majority of TVs sold will have 3D included... the technology is already cheap, and the premium is already as small as $150 for big sets.
 
This is when the next gen starts...

"Nvidia Corp. will integrate general-purpose ARM processing core(s) into a chip that belongs to Maxwell family of graphics processing units (GPUs), the company revealed in an interview. The Maxwell-generation chip will be the first commercial physical implementation of Nvidia's project Denver and will also be the company's first accelerated processing unit (APU).

"The Maxwell generation will be the first end-product using Project Denver. This is a far greater resource investment for us than just licensing a design," said Mike Rayfield, general manager of mobile solutions for Nvidia, in an interview with Hexus web-site... Jensen just told at the press conference that the new chip is 40 times faster than Fermi and that it will get up to 10 to twelve times faster than Kepler.

The manufacturing process of choice is 22nm, something that TSMC hopes to have in 2013 but it still leaves quite a gap in 2012 when there won’t be anything really new, just maybe a tweak in Kepler.

Between now and Maxwell, we will introduce virtual memory, pre-emption, enhance the ability of GPU to autonomously process, so that it's non-blocking of the CPU, not waiting for the CPU, relies less on the transfer overheads that we see today. These will take GPU computing to the next level"

http://www.fudzilla.com/reviews/item/20260-nvidia-maxwell-is-22nm-part
 
DECK'ARD said:
No, they won't.

You will have a split market because of cost, and the sweet-spot issue of glasses-less 3D will be as much of an issue to the consumer as wearing glasses. And may never be got round to the point it's not an issue.
With head tracking, a couple of the 4K & 3D TVs can target sweet spots for up to 4 viewers.

Also, content will drive adoption rates and if you don't have the 3D and 4K content you aren't going to see massive adoption or a massive need to support it.

The same was said about 1080p and Blu-ray.

4K is an accepted standard for HDMI 1.4. http://www.dpllabs.com/article/hdmi-rev-14-4k-x-2k-resolution-mission-impossible

Assume for a moment that holographic Blu-ray storage is available and cheap, at exactly the same price as current 50GB Blu-ray players. Assume that 10% of the market will have TVs that can resolve something over 2K and 3% can resolve 4K (100+ inch screens). Why not support them?

4K should be the standard for commercial and consumer Blu-ray. Its current 24Hz limitation under HDMI 1.4 should be addressed by new standards.

http://reduser.net/forum/showthread.php?t=47212

LG, Samsung and Sony have agreed on a new AV standard called HDBaseT that will supersede HDMI. Panasonic hasn't signed up yet.

The standard has been created to allow daisy-chaining between internet, TV, recorders and audio equipment (one cable linking each piece of kit).

The standard is aimed at delivering everything electronic on the planet down one cable, including 2x full 1080p (3D) and, according to the specs, 4K!

There is no question TV manufacturers are gunning for 4K sets up to 150cm in screen size, because they do not need to significantly change or re-tool the existing substrate production line.

Apparently the hardware is about to be included in top-end sets currently on the production line and will be adopted on all models in early 2011.

What are RED's thoughts on delivering a 4K monitoring output over HDBaseT?

http://www.tuaw.com/2010/07/19/rumor-apple-to-launch-4k-video-format/

Apple is gearing up to launch a new video format.

Apple's supposed new format would be based on the Dirac codec which was developed by BBC Research. The codec is open source and currently supports 1920x1080 resolutions, but Apple plans to up the format to support 4k video – that's a resolution of up to 4096x2160.

HardMac points out that many people still do not have 1080p HDTVs at home, much less TVs capable of supporting 4k resolution, but they theorize that Apple would be planning to enter the HDTV market from the high end – at the same time introducing not only stunning hardware, but making Blu-ray discs look archaic with their "low" resolution.

Apple and Intel's Light Peak (the USB 3.0 replacement) would be suited to this type of application, eliminating HDMI with a new generation of interconnect rather than something like HDBaseT, which would be obsoleted.

Supercompression For Full-HD And 4k-3D (8k) Digital TV Systems
http://www.waset.org/journals/waset/v72/v72-109.pdf

Supercompression is based on super-resolution. That is to say, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask inside the decoder restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
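The abstract is dense, so here's a toy sketch of the basic idea as I read it: compress by shrinking the image (spatial compression stacked on top of the usual per-pixel coding), then decode by upscaling and running an edge-restoring convolution mask. The 4x factor and the unsharp-style kernel are my own illustrative stand-ins, not values from the paper, and this runs on the CPU rather than a GPGPU.

```python
import numpy as np
from scipy import ndimage

def encode(frame, factor=4):
    """Spatial compression step: shrink the frame (a real system would then run
    its usual bit-rate codec on this smaller image)."""
    return ndimage.zoom(frame, 1.0 / factor, order=3)

def decode(small, factor=4):
    """Upscale back to full size, then apply a convolutive mask to restore edges."""
    upscaled = ndimage.zoom(small, factor, order=3)
    # Simple unsharp-mask style kernel standing in for the paper's GPGPU texture mask.
    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)
    return ndimage.convolve(upscaled, sharpen, mode='reflect')

if __name__ == "__main__":
    frame = np.random.rand(2160, 3840)   # a synthetic 4K luma plane
    small = encode(frame)                # 540x960: 16x fewer pixels to store
    restored = decode(small)
    print(small.shape, restored.shape)
```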
 

Lazy8s

The ghost of Dreamcast past
Deriving a heterogeneous processing model from multiple processors like the SPEs and RSX is more an innovation of software than hardware. The load balancing of SGX, and an API like OpenCL to take advantage of it, would be the direction to look toward in terms of hardware.
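For what it's worth, here's a minimal, generic sketch of that kind of heterogeneous load balancing, nothing PS3- or SGX-specific: enumerate whatever CPU and GPU devices OpenCL exposes and split one workload across them. It assumes the pyopencl package and a working OpenCL driver, and the even split is a naive placeholder for a real balancer.

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void scale(__global const float *src, __global float *dst) {
    int i = get_global_id(0);
    dst[i] = src[i] * 2.0f;
}
"""

def run_on_device(device, chunk):
    """Run the toy kernel on one device and return its slice of the result."""
    ctx = cl.Context(devices=[device])
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, KERNEL).build()
    mf = cl.mem_flags
    src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=chunk)
    dst = cl.Buffer(ctx, mf.WRITE_ONLY, chunk.nbytes)
    prog.scale(queue, chunk.shape, None, src, dst)
    out = np.empty_like(chunk)
    cl.enqueue_copy(queue, out, dst)
    return out

if __name__ == "__main__":
    devices = [d for p in cl.get_platforms() for d in p.get_devices()]
    data = np.arange(1 << 20, dtype=np.float32)
    # Naive static split: give each device (CPU or GPU) an equal slice of the work.
    chunks = np.array_split(data, len(devices))
    result = np.concatenate([run_on_device(d, c) for d, c in zip(devices, chunks)])
    print(result[:4])
```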

And being familiar with the context is vital when using an expression like "ahead of its time", especially when that context is the history of processor architectures, including Transputer.
 

thuway

Member
jeff_rigby said:
With head tracking a couple of the 4K & 3-D TVs can target sweet spots to up to 4 viewers.


Assume for a moment that Holographic blu-ray storage is available and cheap, exactly the same price as current 50 gig blu-ray players. Assume that 10% of the market will have TVs that can resolve something over 2K and 3% can resolve 4K (100+ inch screens). Why not support them?

That won't happen for at least another 10 years. AT LEAST.

You do realize that a majority of consumers fail to see the benefits of Blu-ray, and even the ones that do stick to an inferior standard for the sake of convenience.

No company in their right mind would want to support 4K in the next 5-10 years. I can barely see the difference between 720p and 1080p video.
 

thuway

Member
Stephen Colbert said:
This is when the next gen starts...

"Nvidia Corp. will integrate general-purpose ARM processing core(s) into a chip that belongs to Maxwell family of graphics processing units (GPUs), the company revealed in an interview. The Maxwell-generation chip will be the first commercial physical implementation of Nvidia's project Denver and will also be the company's first accelerated processing unit (APU).

"The Maxwell generation will be the first end-product using Project Denver. This is a far greater resource investment for us than just licensing a design," said Mike Rayfield, general manager of mobile solutions for Nvidia, in an interview with Hexus web-site... Jensen just told at the press conference that the new chip is 40 times faster than Fermi and that it will get up to 10 to twelve times faster than Kepler.

The manufacturing process of choice is 22nm, something that TSMC hopes to have in 2013 but it still leaves quite a gap in 2012 when there won’t be anything really new, just maybe a tweak in Kepler.

Between now and Maxwell, we will introduce virtual memory, pre-emption, enhance the ability of GPU to autonomously process, so that it's non-blocking of the CPU, not waiting for the CPU, relies less on the transfer overheads that we see today. These will take GPU computing to the next level"

http://www.fudzilla.com/reviews/item/20260-nvidia-maxwell-is-22nm-part


As awesome as this sounds, roadmaps never end up matching what was initially planned. Remember the delays with Fermi?
 

Lazy8s

The ghost of Dreamcast past
Next generation starts next year with the A9600.

Xperia Play could actually be a contender if it adopted the Nova for a late-2012 model.
 