
Why do Playstation 1 polys jitter when the camera pans?

HTupolev

Member
I'm no technical expert by any means, but if it had no z-buffering, how does depth work in a PSX game? Sorting polygons 'manually'?
Yes.

That was very common all through the mid 90's, actually. Software renderers simply didn't have access to the raw throughput to get away with z-buffering, so they used sorting instead. Z-buffering was actually perceived as a bit of a brute-force approach.

This was all fine and dandy in simple cases like the early portal-based FPS engines, but as scenes grew more complex, sorting them correctly became much harder.

Eventually z-buffers won out due to their algorithmic simplicity on more complex scenes and their suitability for hardware acceleration.
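For anyone wondering what "sorting instead of z-buffering" looks like in practice, here's a minimal painter's-algorithm sketch in C. The Poly struct and draw_poly() are made up for illustration; the real PS1 library actually used an "ordering table" of depth buckets rather than a full sort, but the idea is the same: draw back to front and let nearer polygons overwrite farther ones.

Code:
#include <stdlib.h>

typedef struct {
    float avg_z;                 /* representative depth, e.g. mean vertex Z */
    /* ...vertex and texture data would live here... */
} Poly;

void draw_poly(const Poly *p);   /* hypothetical rasterizer entry point */

static int cmp_depth(const void *a, const void *b)
{
    float za = ((const Poly *)a)->avg_z;
    float zb = ((const Poly *)b)->avg_z;
    return (za < zb) - (za > zb);         /* farthest first */
}

void draw_scene(Poly *polys, size_t count)
{
    qsort(polys, count, sizeof(Poly), cmp_depth);
    for (size_t i = 0; i < count; ++i)
        draw_poly(&polys[i]);             /* later polys overdraw earlier ones */
}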
 

efyu_lemonardo

May I have a cookie?
The lack of z-buffering and perspective correction has nothing to do with the wobbliness. As mentioned before, the wobbly polygons were caused by insufficient precision combined with all calculations being done using integers instead of floating point values.

The PSOne has a co-processor called the Geometry Transformation Engine (GTE), which is meant to perform many 3D operations like matrix-vector multiplication, lighting and fog. It is fixed-point (i.e. integer) based and can't work with floating point numbers. The X, Y and Z coordinates of vectors are stored as 16-bit values: 1 bit for the sign, 3 bits for the integral part and 12 bits for the fractional part. It only supports 3x3 rotation matrices, with translation handled as a separate operation using vectors with 32-bit XYZ integer components (no fractional parts).
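To make that 1.3.12 format concrete, here's a tiny sketch in plain C (helper names invented for this example): with 12 fractional bits the smallest representable step is 1/4096, and anything finer is simply rounded away.

Code:
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 12
#define ONE (1 << FRAC_BITS)               /* 1.0 in 1.3.12 fixed point = 4096 */

static int16_t to_fixed(double v)    { return (int16_t)(v * ONE); }
static double  from_fixed(int16_t f) { return (double)f / ONE; }

int main(void)
{
    double  x  = 0.73051;                  /* an arbitrary fractional value */
    int16_t fx = to_fixed(x);              /* stored as 2992                */
    printf("%f -> %f\n", x, from_fixed(fx)); /* 0.730510 -> 0.730469: the   */
                                             /* difference is the error     */
    return 0;
}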

This seems to be one reason behind the wobbliness: while drawing a scene, the objects' polygons must be transformed from their local space into world space (at the very least one rotate and one translate command) and then from world into camera space (another rotate and translate). Since translation is integer-only (no fractional part), it will "snap" around, with the severity depending on the scale of the scene elements. Also, if you have multiple rotations and translations stacked on top of each other, the precision errors build up quickly.

Finally, the GTE outputs the result of the perspective transform as integer pixel coordinates (no fractional part), which are then fed to the GPU for drawing. This means it cannot display any subpixel movement: the polygon stays put until one of its vertices moves enough to snap into a different pixel.
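That integer-only screen output is exactly why movement snaps. A hypothetical illustration (the numbers and the 1.6 "projection" factor are made up): the world-space position glides smoothly, but the on-screen coordinate only changes once it crosses into a whole new pixel.

Code:
#include <stdio.h>

int main(void)
{
    /* a vertex slides smoothly in world space... */
    for (double world_x = 100.0; world_x < 100.75; world_x += 0.1) {
        double projected = world_x * 1.6;   /* stand-in for the full projection */
        int    screen_x  = (int)projected;  /* GTE-style integer-only output    */
        printf("world %.1f -> screen %d\n", world_x, screen_x);
    }
    /* ...but screen_x sits at 160 for most steps, then jumps straight to 161 */
    return 0;
}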

The DS, for example, was also integer-based but had subpixel movement since the final 2D output coordinates had fractional parts (the DS has edge antialiasing, which wouldn't work otherwise). While there was some degree of wobbliness due to fixed-point math, it was much milder because the matrices on the DS had 32-bit components (1 bit for sign, 19 bits for the integral part, 12 bits for the fractional part) and were 4x4, so translation and perspective transformation could be done in the same operation. Translations also had 12 fractional bits, so they accumulated fewer errors.

You can find a complete technical description of the workings of the PSOne here http://www.raphnet.net/electronique/psx_adaptor/Playstation.txt and for the DS here http://problemkaputt.de/gbatek.htm#ds3dvideo

By the way, the PC version of FF8 still has wobbly polygons, probably because their 3D engine had a bunch of pre-calculated fixed-point data optimized for the PSOne hardware that couldn't be easily converted. FF7 PC, however, is silky smooth, which means they could get away with simply replacing the data types in the code.


Great post, and thanks for the links!
 

DJ_Lae

Member
The lack of z-buffering and perspective correction has nothing to do with the wobbliness. As mentioned before, the wobbly polygons were caused by insufficient precision combined with all calculations being done using integers instead of floating point values. [...]

Very informative! I'm definitely going to read that tech document on the PS1 later.
 

p0rl

Member
The lack of z-buffering and perspective correction has nothing to do with the wobbliness.

I guess it depends on what the OP meant - I assumed they were referring to the "melting walls" phenomenon, but reading it again I think you're probably right.

From a quality / accuracy perspective there is an awful lot wrong with a PS1 image, that's for sure.
 

KainXVIII

Member
That's why you need to try this plugin for ePSXe

 
This would be like the PSX equivalent of sprite edge smoothing on SNES emulation.

Embrace the chunky pixels for SNES and embrace the jittery polys for PSX.

No, it's not equivalent. "Sprite edge smoothing", like basically any post-process filter, doesn't add new information to the image (though it may make the image look closer to what it looks like on a CRT TV). Fixing polygon jitter and texture warping, on the other hand, objectively enhances precision and image quality.
I can see why some may still not like it because of nostalgia, but rest assured that these artifacts were not a visual feature intended by the developers and have nothing to do with the artistic vision.
 

djtiesto

is beloved, despite what anyone might say
probably has to do with low precision when keeping the position of points in 3D space, so what would probably need to be at (22.1372942,-1.6243222) becomes something like (22.13,-1.62), so they kinda jump around in 3D space instead of going to a more exact spot.

This is what I've always thought - like the floating point calculations weren't entirely accurate.
 

Fafalada

Fafracer forever
lightchris said:
The PS2 doesn't suffer from any of the artifacts mentioned in this thread so far.
The PS2 generation of hardware had certain restrictions on UV mapping precision that could, in the case of very large polygons, result in texture jittering. Nothing to do with anything in this thread, though.
In the case of FFX, people probably noticed jittery polygon animations, which were a result of the animation compression used in that game - nothing to do with the hardware.

M3d10n said:
The DS, for example, was also integer-based but had subpixel movement since the final 2D output coordinates had fractional parts (the DS has edge antialiasing which wouldn't work otherwise).
From what I remember, edge AA on the DS was a bit of a hack that wouldn't necessarily need fractions to work, but I need to look up the docs again, it's been a while.
But yeah, IIRC the DS rasterizer had subpixel precision, as I don't recall noticing jittering beyond what is there just because the screen resolution is so low.

dark10x said:
That's where you're wrong!
That basically implies the emulator has to know the exact data set on which the 3D transform happens - and work around the code to pass Z data forward. I'd guess it would require game-specific patches for most of the library, which isn't really emulation anymore (you're effectively modifying games directly at that point).
On the plus side - that's just a step removed from fixing screen-space jitter - passing fractional information would be the same point in the pipeline, and you have to hijack the transform anyway, so you might as well do it at arbitrary precision.
 

Widge

Member
In the case of FFX, people probably noticed jittery polygon animations, which were a result of the animation compression used in that game - nothing to do with the hardware.

Thanks. Because I noted it in Vagrant Story and then FFX, I put it down to a Square issue.
 

M3d10n

Member
That's where you're wrong!

I don't have the link handy but there is someone that has been working on this very issue with one of the PSX emulators out there. He posted a number of examples of perspective correct texture mapping recently. It doesn't seem to be available yet but there is certainly work being done to solve the issue. Really hoping it sees the light of day as it could really improve the look of those games.

How does that even work? The GTE doesn't communicate with the GPU, it just receives XYZ vertices and outputs XY integer coordinates, which are sent to the GPU by the game code.

I suppose someone could create an emulator where the GTE outputs XY integer coordinates to the game code but stores the XYZ fractional coordinates somewhere outside the PSOne RAM, and then the emulated GPU fetches those using the integer XY coordinates as indices. In theory, it could work, but who knows what kind of issues would arise? It's worth a try anyway.
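Just to sketch how that side-channel idea might look (purely hypothetical emulator pseudocode, not how any real plugin is structured, and ignoring that GTE coordinates are really relative to a drawing offset): the emulated GTE stashes the full-precision result keyed by the rounded coordinates it hands back to the game, and the emulated GPU tries to look it up again at draw time.

Code:
#include <stdint.h>

typedef struct { float x, y; int valid; } SubPixel;

/* one slot per possible integer position in the 1024x512 framebuffer */
static SubPixel side_table[512][1024];

/* called by the emulated GTE after the perspective transform */
void gte_store_result(float fx, float fy, int16_t *out_x, int16_t *out_y)
{
    *out_x = (int16_t)fx;                 /* what the game actually sees  */
    *out_y = (int16_t)fy;
    if (*out_y >= 0 && *out_y < 512 && *out_x >= 0 && *out_x < 1024)
        side_table[*out_y][*out_x] = (SubPixel){ fx, fy, 1 };
}

/* called by the emulated GPU when the game submits a vertex for drawing */
void gpu_lookup_vertex(int16_t x, int16_t y, float *draw_x, float *draw_y)
{
    if (y >= 0 && y < 512 && x >= 0 && x < 1024 && side_table[y][x].valid) {
        *draw_x = side_table[y][x].x;     /* recovered sub-pixel position  */
        *draw_y = side_table[y][x].y;
    } else {
        *draw_x = x;                      /* fall back to the integer value */
        *draw_y = y;
    }
}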

I didn't know about that plugin for improving accuracy. I suppose it works by having the GTE use floating point internally, but I doubt it would eliminate jitter because the output would still need to be rounded to fixed point. I'll check it later.

From what I remember, edge AA on the DS was a bit of a hack that wouldn't necessarily need fractions to work, but I need to look up the docs again, it's been a while.
But yeah, IIRC the DS rasterizer had subpixel precision, as I don't recall noticing jittering beyond what is there just because the screen resolution is so low.
Most of the jittering on the DS came from the textures (the UV coordinates had only 4 bits for the fractional part, which caused precision loss when interpolating UVs during rasterization). When you use untextured polygons (or mostly flat colors) everything moves very smoothly and it looks super clean thanks to the AA and 18-bit color buffer.
 

JNT

Member
That's where you're wrong!

I don't have the link handy but there is someone that has been working on this very issue with one of the PSX emulators out there. He posted a number of examples of perspective correct texture mapping recently. It doesn't seem to be available yet but there is certainly work being done to solve the issue. Really hoping it sees the light of day as it could really improve the look of those games.

I've been led to believe this was not possible due to parts of the graphics pipeline never requesting Z. Could you find the link?
 

efyu_lemonardo

May I have a cookie?
Er, no, you could factor in the orientation in your calculation. This is an engine, getting relevant information is the priority.

Sorry for not replying sooner. After reading more about the GTE and GPU, including seeing the actual number of cycles each transform operation would take, I think I understand your comments better. It seems that at least on paper the GTE could perform over a million triangle transform ops per second, which is at least 15k-30k transforms per frame, depending on frame rate. According to data on scene geometry found on other sites, that was more than enough to transform every triangle in view in a scene multiple times. And assuming the GPU could update the ordering table fast enough, which seems likely, it was possible to generate a perspective correct draw order for all polygons in view in a scene each frame. (This is all based on the assumption that cop2 operated at the same clock as the CPU, which is something I haven't been able to verify yet).
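Quick back-of-the-envelope check of those numbers (the 1,000,000 transforms/second figure is just the rough peak quoted above, not a measured value):

Code:
#include <stdio.h>

int main(void)
{
    const int transforms_per_second = 1000000;   /* assumed rough GTE peak */
    printf("at 60 fps: ~%d per frame\n", transforms_per_second / 60); /* ~16,666 */
    printf("at 30 fps: ~%d per frame\n", transforms_per_second / 30); /* ~33,333 */
    return 0;
}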

In that case I guess the eight sort lists per object would be used for jump cuts (immediate shifts from one perspective to another) such as moving between a cabin and third person view in a racing game, or switching between views during combat in a JRPG, etc.
But when the user had full control over the camera and could shift continuously between perspectives, the polygon coordinates and the draw order for views in between were calculated on the fly by the GTE and updated by the GPU.

I'm rather new to this stuff, and it's not always easy to know who you can trust in these kinds of threads, so I apologize if my questions were a bit overbearing.

Thanks for your patience!
 
I'm with you on this one: https://www.youtube.com/watch?v=tQMMyUGKBag

The PC version of Metal Gear Solid Integral actually did look amazing with that software z-buffer. There were still some polygon sorting issues and juddering, but watching the video here, it doesn't seem bad. Did this port even have hardware acceleration? It looks pretty nice.

Relevant:

http://www.linkedin.com/in/malkia

Software Engineer
Digital Dialect

December 1999 – December 2000 (1 year 1 month)

With three other programmers we ported Konami's Metal Gear Solid (PSX) and the VR missions to PC. The project took 9 months, shipped in September 2000.

I was in charge of overall porting the game to run on the PC, writing a GTE emulator and other software to emulate the PSX, the most interesting of which was adding sub-pixel accuracy that is normally missing on the PSX hardware - this resulted in a better quality of the game vs. the PSX version, e.g. less jittering.
 

iidesuyo

Member
I remember talking with some guy in the late 90's who was a huge Sega fan.

He told me that Virtua Fighter 2 couldn't be ported 1:1 to the PlayStation, because the Saturn used special modes that made things possible like the zooming parallax backgrounds (Tekken had static backgrounds), or the floor that came without the "jittering" described in the OP. How much truth is there to that?

It's weird how I remember stuff like that...
 

Tain

Member
And when it comes to the Metal Gear Solid PC port, wasn't it missing some framebuffer effects in Hardware mode? I remember things like the VR mission environment lines not having those light trails.
 

jett

D-Member
That basically implies the emulator has to know the exact data set on which the 3D transform happens - and work around the code to pass Z data forward. I'd guess it would require game-specific patches for most of the library, which isn't really emulation anymore (you're effectively modifying games directly at that point).
On the plus side - that's just a step removed from fixing screen-space jitter - passing fractional information would be the same point in the pipeline, and you have to hijack the transform anyway, so you might as well do it at arbitrary precision.

I found some information about the hack:

http://ngemu.com/threads/peteopengl2tweak-tweaker-for-peteopengl2-plugin-w-gte-accuracy-hack.160319/

Tomb Raider 3 without the hack

Tomb Raider 3 with the GTE accuracy hack enabled

I tried it quickly and it seems to work pretty decently. It's not perfect but it's a neat improvement.
 
I think it was due to the lack of floating point (precision) calculation, not due to the lack of a Z-buffer. The reason the vertices move about isn't depth, it's the lack of precision.


But I'm no PlayStation hardware guru.

Came to post this. This would be my technical assessment as well. The Z-buffer is a hardware component that manages depth sorting, not vertex positioning. If the Z-buffer were being flaky, you would see polys drawing in front of/behind each other sporadically, not polys shifting position.
 

jett

D-Member
Will try that out later.

I tested it with R4 just now, and the polygonal integrity makes a huge difference when running it natively in 1080p. Polygons don't wobble anymore as if you were tripping balls. It does nothing to help the textures getting all messed up as usual, but the polygons are pretty stable.


Without hack

With hack

Look at the bridge at the far end of the image, it's a total mess without the GTE hack. It's not perfect, but it's progress. I would still choose to play in its original resolution because the texture wobbling is still there and is greatly amplified by running at a high resolution. There are some glitches too.

It's really such a shame the PS1 released in such a state; both of these things prevent emulation from greatly improving the graphics the way it does for N64 games. Nobody, ever, would want to run N64 games in their original res.
 
Relevant:

http://www.linkedin.com/in/malkia

[...] I was in charge of overall porting the game to run on the PC, writing a GTE emulator and other software to emulate the PSX, the most interesting of which was adding sub-pixel accuracy that is normally missing on the PSX hardware - this resulted in a better quality of the game vs. the PSX version, e.g. less jittering.


That's pretty interesting, shame I didn't know about this version back then. Back in 2001 I swear I remember trying to emulate the PS1 version on my PC using Bleem and getting some terrible results. If I knew this one existed back then, I would have bought it instantly.

I did have the MGS2:S PC port though, and even though it did have some issues at the time, it actually wasn't bad. It was a port of the Xbox version, though I remember it needing some fairly high PC requirements for its time, and it required a gamepad, because KB + M controls were trash.
 

M3d10n

Member
I remember talking with some guy in the late 90's who was a huge Sega fan.

He told me that Virtua Fighter 2 couldn't be ported 1:1 to the PlayStation, because the Saturn used special modes that made things possible like the zooming parallax backgrounds (Tekken had static backgrounds), or the floor that came without the "jittering" described in the OP. How much truth is there to that?

It's weird how I remember stuff like that...

The Saturn's 2nd graphics chip, the VDP2, was a 2D scanline tilemap renderer. It could display a couple of 2D layers of bitmaps/tilemaps, could set up blending between these layers and could also perform rotation/scaling. Since it's scanline-driven, the rotation, scaling and other values could be changed at each scanline, creating effects like the famous SNES Mode 7.

However, the VDP2 was quite a bit more capable than the SNES: it could rotate the bitmap/tile layer using a perspective 3D matrix instead of a 2D one, so it could easily use a 2D layer to create infinite tile-based 3D planes. Since the 3D transformation was performed for every pixel as it's scanned out to the TV, the result was more similar to ray-casting and was completely jitter-free with perfect perspective correction.

Several Saturn games relied heavily on this because it gave a high-quality plane that could stretch infinitely using zero polygons. It's the secret behind VF2, DOA and Last Bronx: the arenas' floors (and ceilings, in Last Bronx) are just a 2D layer, so all polygons can be spent on the characters.

Here's a video showing a bunch of Saturn games that used the VDP2 "mode7" to complement 3D models. Even at low resolution, it's possible to notice how the "mode7" planes are jitter free and have very accurate subpixel texturing.

Since the VDP2 was scanline-driven, it could also be used to perform scanline-based distortion effects like the water in level 4 of Panzer Dragoon Zwei and in Grandia. In Megaman X4, for example, the Saturn version contains a heat distortion effect in some backgrounds that is missing on the PSOne.
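To give a feel for what that per-scanline trick amounts to, here's a rough, generic "mode 7"-style floor sketch in C (no rotation, and texture_sample()/put_pixel() are invented helpers - this is the idea, not actual VDP2 register programming): each scanline below the horizon corresponds to one depth slice of an infinite ground plane, and texture coordinates are derived per pixel, which is why the result is perspective-correct and jitter-free.

Code:
#include <stdint.h>

#define SCREEN_W 320
#define SCREEN_H 240
#define HORIZON  (SCREEN_H / 2)

extern uint16_t texture_sample(float u, float v);  /* hypothetical tile/bitmap fetch */
extern void     put_pixel(int x, int y, uint16_t c);

void draw_floor(float cam_height, float cam_x, float cam_z)
{
    for (int y = HORIZON + 1; y < SCREEN_H; ++y) {
        /* distance of this scanline's slice of the ground plane */
        float depth = cam_height * HORIZON / (float)(y - HORIZON);
        for (int x = 0; x < SCREEN_W; ++x) {
            /* map the screen column to a point on the plane at that depth */
            float u = cam_x + depth * (x - SCREEN_W / 2) / (float)HORIZON;
            float v = cam_z + depth;
            put_pixel(x, y, texture_sample(u, v));
        }
    }
}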

I found some information about the hack:

http://ngemu.com/threads/peteopengl2tweak-tweaker-for-peteopengl2-plugin-w-gte-accuracy-hack.160319/

Tomb Raider 3 without the hack

Tomb Raider 3 with the GTE accuracy hack enabled

I tried it quickly and it seems to work pretty decently. It's not perfect but it's a neat improvement.

This is the accuracy hack. It does not add perspective correction (look at the door texture, it still distorts).

Wait. Oh shit! I think I understand how it works!

The GTE outputs two 16-bit values for the 2D X and Y coordinates of each vertex, to be passed by the game to the GPU. Those values specify the coordinates in the framebuffer the vertex is to be placed at. However, the largest number a 16-bit variable can store is several times greater than the maximum dimensions of the PS1 framebuffer, meaning a lot of bits are wasted. As long as the game doesn't use the output value for anything other than rendering, the emulated GTE could output 16-bit fixed point values and the emulated GPU could be modified to take those instead and boom: subpixel accuracy.
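A small sketch of that "wasted bits" idea (hypothetical, and the 4 fractional bits are just an assumption for illustration): since the framebuffer is at most 1024 pixels wide, a 16-bit coordinate has headroom for a few fractional bits, as long as the game only ever forwards the value to the GPU untouched.

Code:
#include <stdint.h>

#define SUBPIX_BITS 4   /* assumed: 4 fractional bits still leave a +-2048 range */

/* emulated GTE: keep fractional precision instead of rounding it away */
static int16_t encode_coord(float fx)
{
    return (int16_t)(fx * (1 << SUBPIX_BITS));     /* e.g. 160.25 -> 2564 */
}

/* emulated GPU: patched to undo the scaling before rasterizing */
static float decode_coord(int16_t packed)
{
    return (float)packed / (1 << SUBPIX_BITS);     /* 2564 -> 160.25 */
}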
 
The Saturn's 2nd graphics chip, the VDP2, was a 2D scanline tilemap renderer. [...] It's the secret behind VF2, DOA and Last Bronx: the arenas' floors (and ceilings, in Last Bronx) are just a 2D layer, so all polygons can be spent on the characters.

The original Virtua Fighter on the Saturn didn't use a 2D plane though, you can see heavy amounts of pop-in here: https://www.youtube.com/watch?v=vl895zXjfyQ&feature=player_detailpage#t=114

Virtua Fighter Remix looks like it may have though: https://www.youtube.com/watch?feature=player_detailpage&v=IYdLhLRKr-o#t=136
 

wonderone

Neo Member
Wanted to post here earlier, but had to wait for my account to be verified :p

The commenter about the GTE accuracy is absolutely correct.

Coming from the demoscene (most guys from this scene went on to join or form game companies, or work at AMD/Nvidia later in life) I can say it was common practice in the early 90s to use what they call fixed-point math (basically like the Egyptians did back in the day: you multiply by a fixed number first, then do your (matrix) calculations, and then divide the result by the same fixed number), so I wasn't surprised to see the first generation of 3D consoles use this principle as well.

A couple of years ago I tried hacking into the PCSXR source code to intercept things in the GTE part of the emulator core, instead of using some kind of shader later in the process, with some success. In theory it should also be possible to get proper perspective correction working here by storing the z-coordinates the original hardware discards and keeping track of your own z-buffer.
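For anyone who hasn't seen that scale-then-divide trick, a tiny illustration (the values and the 2^12 scale are just picked for the example): scale by a constant up front, do the math with integers, and divide the extra scale factor back out afterwards.

Code:
#include <stdint.h>
#include <stdio.h>

#define SCALE 4096                       /* the "fixed number": 2^12 */

int main(void)
{
    int32_t a = (int32_t)(1.5  * SCALE); /* 1.5  -> 6144 */
    int32_t b = (int32_t)(0.25 * SCALE); /* 0.25 -> 1024 */

    /* multiply in integers, then divide the extra SCALE factor back out */
    int32_t product = (int32_t)(((int64_t)a * b) / SCALE);

    printf("%f\n", (double)product / SCALE);   /* prints 0.375 */
    return 0;
}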
 

slapnuts

Junior Member
Thanks. Though I still prefer the look of PSone games over the N64's. It might have had more jaggies, but it also had more detail, as opposed to the muddy and shitty-looking textures of the N64.

Not all N64 games looked muddy and had shitty looking textures...some N64 games like Mario 64 and WaveRace simply blew me away...along with a slew of other games on that system. It came down to developer talent that made the difference.

I didn't mind the PS1 shifting polys in some games compared to others; again, it came down to developer talent that made the difference. I mean, there were ways of minimizing it, just like there were ways of minimizing that ugly fog some N64 games sported.
 

efyu_lemonardo

May I have a cookie?
Wanted to post here earlier, but had to wait for my account to be verified :p

That all sounds super interesting. I'm rather new to programming but have a strong background in math, and having read large parts of the documentation on the PSX in this thread, I get the impression it was rather simple to understand, on a conceptual level at least.

I have no idea how other 3D hardware works and like I said I'm new to this, but something about the approach taken by the hardware with regards to geometry processing seems to make sense to me.

This thread has made me curious to learn more.
 

CTLance

Member
Not all N64 games looked muddy and had shitty looking textures...some N64 games like Mario 64 and WaveRace simply blew me away...along with a slew of other games on that system. It came down to developer talent that made the difference.
There was also the idiotic issue of the GPU microcode. In vague terms, the GPU was an extremely programmable processor, up to and including its own instruction set. To get the thing to do anything, you uploaded a microcode blob into the chip that told it what to do with which commands.

The default SGI microcode that Nintendo gave to all devs was pretty damn inefficient for gaming purposes, since it valued precision over performance. There was a Nintendo version of the microcode blob, but they actually discouraged devs from using it.

Nintendo only gave a very tiny subset of trusted devs the ability to write their own microcode. Most of them had to beg on their hands and knees for this kind of low level access, and the documentation and tools they received in return appear to have been extremely lackluster.

So, if you ever wonder why some N64 games look so much better than others, then the microcode might have played a huge role. The Star Wars games from Factor 5 are one such example.

(wiki)

Edit: What I'm trying to say here is that the actual programming skill of 3rd party devs is not necessarily reflected in the final product, due to reasons outside of their control.
 

wonderone

Neo Member
That all sounds super interesting. I'm rather new to programming but have a strong background in math [...] This thread has made me curious to learn more.

Glad to contribute... 3D hardware these days works a bit differently, but the math behind it never changes. These days it's more about writing wicked shaders. Having strong knowledge of the low-level hardware can be a huge advantage in terms of setting up your data, etc., and can mean the difference between 30 and 60 FPS, as demonstrated by a recent AAA title some friends of mine worked on.
 

pottuvoi

Banned
No Z-buffering.
there we go
Oh, yea. That would explain it.
Lack of Z-buffer!
Wrong. (As stated a few times above - I guess this is kind of a PSX urban legend.)
Did the N64 support a z-buffer? Is that the reason why that console's graphics output was "cleaner"?
The z-buffer only helped with polygons not popping in front of each other in a seemingly random fashion.

The biggest reasons were:
Bi- and trilinear filtering of textures (with a correct method to select the mipmap level) - see the sketch after this list.
Edge antialiasing.
Sub-pixel accuracy/correction when drawing the image.
Perspective-correct rendering of textures/polygons.
High enough precision for camera vectors (location and target/direction).
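Here's a rough sketch of what bilinear filtering amounts to (texel_fetch() is an invented helper, and this is generic textbook bilinear, not the N64's exact hardware filter): blend the four surrounding texels by the fractional texture coordinate - something the PS1 GPU, which only did nearest-neighbour sampling, never did.

Code:
#include <math.h>

typedef struct { float r, g, b; } Color;

extern Color texel_fetch(int u, int v);   /* hypothetical nearest-texel fetch */

static Color lerp(Color a, Color b, float t)
{
    Color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

Color sample_bilinear(float u, float v)
{
    int   iu = (int)floorf(u), iv = (int)floorf(v);
    float fu = u - iu,         fv = v - iv;     /* fractional parts drive the blend */

    Color top    = lerp(texel_fetch(iu, iv),     texel_fetch(iu + 1, iv),     fu);
    Color bottom = lerp(texel_fetch(iu, iv + 1), texel_fetch(iu + 1, iv + 1), fu);
    return lerp(top, bottom, fv);
}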

I tested it with R4 just now, and the polygonal integrity makes a huge difference when running it natively in 1080p. Polygons don't wobble anymore as if you were tripping balls. It does nothing to help the textures getting all messed up as usual, but the polygons are pretty stable.
When emulating PS1 games at a higher resolution, the lack of sub-pixel accuracy actually turns into a lack of "sub-320x240 grid" accuracy. ;)
 