...The thread title is not "which system's games look better on an emulator?". Maybe we should have that thread, though, because that would be a question that does not have an objective, factually correct answer...
That would be an interesting thread! Stuff that might be interesting to consider, from a post from earlier in this thread:
Link
Nintendo was reluctant to allow the use of custom microcode, because they were afraid of ending up with PSX-looking games. Here are the words of the WDC (World Driver Championship) lead programmer: "...there was always a concern during development that Nintendo would bounce the title if they saw PlayStation like visual artifacts"
Some other posters in this thread had mentioned this issue (see
here), but I hadn't seen that statement before; it's very interesting. Thanks for posting it.
After a web search, I read through the B3D thread where Rob Povey (the WDC lead programmer) made that statement. It's very much worth reading, for folks interested in this topic:
Rob Povey said:
https://forum.beyond3d.com/threads/...datory-functions-of-the-hardware-right.53745/
...The main memory [of the N64] had hideous latency, it was organized in 2 banks the idea being that Z would go in one bank and color in the other, in practice that didn't buy you very much,
and with Z on fill was very limited. The CPU's small direct mapped cache couple[d] with the poor memory latency was an absolute performance killer. Which is probably why Nintendo went the way they did with [the GameCube]... Yes depth buffering was a big penalty, as I said above the RDRAM was split into two banks, and you were supposed to put the Frame Buffer in one and the ZBuffer in the other, in practice the difference was only about 5 or 10%.
We didn't use the ZBuffer for World Driver Championship or Stunt Racer 2K, that coupled with faster uCode [microcode] let us do a lot more visually,
there was always a concern during development that Nintendo would bounce the title if they saw playstation like visual artifacts...
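Povey's remark about the small direct-mapped cache is worth unpacking: with only one line per set, two hot addresses that happen to map to the same set evict each other on every single access, so each access pays the full RDRAM latency. A toy model (made-up sizes for the demo, not the real R4300i cache geometry) shows the pathological case:

```python
# Toy direct-mapped cache model (illustrative only; the sizes are
# invented, not the actual N64 R4300i cache parameters). With one
# line per set, two aliasing addresses thrash each other forever.

NUM_SETS = 4     # assumed tiny cache for the demo
LINE_SIZE = 16   # assumed bytes per line

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * NUM_SETS
        self.hits = 0
        self.misses = 0

    def access(self, addr: int) -> None:
        index = (addr // LINE_SIZE) % NUM_SETS
        tag = addr // (LINE_SIZE * NUM_SETS)
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1        # conflict miss: evict whatever was there
            self.tags[index] = tag

cache = DirectMappedCache()
# Two buffers whose addresses alias to the same set:
for _ in range(100):
    cache.access(0x0000)   # maps to set 0
    cache.access(0x0040)   # 64 bytes later: also set 0, different tag
print(cache.hits, cache.misses)  # 0 200 -- pure thrashing
```

A set-associative cache (like the GameCube's) tolerates this access pattern, which fits Povey's guess about why Nintendo "went the way they did" with the GC.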
https://en.wikipedia.org/wiki/Nintendo_64_programming_characteristics
...It may be that most Nintendo 64 games are actually
fill-rate limited, not geometry limited; thus, there is a variety of possible techniques by which to maximize the fill rate. The RDP's (Reality Drawing Processor) fill rate is significantly affected by the optimizations taken in the microcode used by the developer — often specifically with Z-buffering. Thus,
for maximum performance, the microcode supplied by Nintendo may be replaced by each developer...
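To put a rough number on why skipping the Z-buffer helps fill rate: under a pure memory-bandwidth model (with illustrative figures I'm assuming here, not official N64 specs), a Z-buffered pixel costs a color write plus a depth read and a depth write, roughly tripling per-pixel memory traffic:

```python
# Back-of-envelope sketch (assumed illustrative numbers, not official
# N64 specs): Z-buffering adds a depth read and a depth write to every
# drawn pixel, cutting effective fill rate under a bandwidth limit.

BYTES_COLOR = 2   # assumed 16-bit framebuffer pixel
BYTES_Z = 2       # assumed 16-bit depth value

def bytes_per_pixel(z_buffered: bool) -> int:
    """Approximate memory traffic per drawn pixel."""
    if z_buffered:
        # color write + Z read + Z write
        return BYTES_COLOR + 2 * BYTES_Z
    return BYTES_COLOR  # color write only

def relative_fill_rate(z_buffered: bool) -> float:
    """Fill rate relative to the no-Z case, assuming a pure bandwidth limit."""
    return bytes_per_pixel(False) / bytes_per_pixel(z_buffered)

print(relative_fill_rate(False))  # 1.0
print(relative_fill_rate(True))   # ~0.33 under this naive model
```

Real hardware hides some of this latency, and the two-bank split Povey describes was meant to help further, which is presumably why the penalty he actually measured (5-10% from the bank split, but a large win from dropping Z entirely) differs from the naive model; the direction of the effect is the same, though.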
We shouldn't forget what Nintendo's own Genyo Takeda said about this:
https://en.wikipedia.org/wiki/Nintendo_64_programming_characteristics
As the Nintendo 64 reached the end of its lifecycle, hardware development chief Genyo Takeda referred to the programming challenges using the word hansei (Japanese: 反省 "reflective regret"). Looking back, Takeda said "When we made Nintendo 64, we thought it was logical that if you want to make advanced games, it becomes technically more difficult. We were wrong.
We now understand it's the cruising speed that matters, not the momentary flash of peak power."
Some have speculated that the lack of sufficient ‘cruising speed' is the reason Sega of Japan rejected the N64 prototype, after reviewing it:
http://www.sega-16.com/2006/07/interview-tom-kalinske/
Tom Kalinske: ...we all knew that there would come a day when the Genesis would no longer have a life, and we'd have to move on to the next technology... When we started the CD-ROM efforts, clearly those were the early days of using optical discs for video games... we had been contacted by Jim Clark, the founder of SGI (Silicon Graphics Inc.)... We were quite impressed, and we called up Japan and told them to send over the hardware team because these guys really had something cool. So the [Sega of Japan] team arrived, and the senior VP of hardware design arrived, and when they reviewed what SGI had developed, they gave no reaction whatsoever. At the end of the meeting, they basically said that it was kind of interesting, but the chip was too big (in manufacturing terms), the throw-off rate would be too high, and they had lots of little technical things that they didn't like: the audio wasn't good enough; the frame rate wasn't quite good enough, as well as some other issues. So, the SGI guys went away and worked on these issues and then called us back up... This time, Nakayama went with them.
They reviewed the work, and there was sort of the same reaction: still not good enough...They had spent all that time and effort on what they thought was the perfect video game chipset, so what were they supposed to do with it? I told them that there were other companies that they should be calling... Needless to say, he did [Clark and Yamauchi apparently met
in 'early 1993'], and that chipset became part of the next generation of Nintendo products (N64)...
This was the speculation of one of the posters at the B3D forum, in the same thread where Rob Povey made the above-mentioned statements (about how he coded World Driver Championship):
[QUOTE='swaaye' at the B3D Forum]
https://forum.beyond3d.com/threads/...datory-functions-of-the-hardware-right.53745/
I suppose they [SGI] were breaking new low-cost ground, but it is [still] something that SGI engineered those serious design flaws. Also, considering Sega turned the hardware down... you have to wonder how long it had been stewing...[/QUOTE]
Square's Hiroshi Kawai had significant difficulties when testing the N64's capabilities against the PlayStation, at a time when Square had not yet made the final decision to go with Sony for FF7.
There was communication between Square and Nintendo about these difficulties, and Nintendo took the step of organizing a trip for Square's programmers, to try to address their concerns:
Hiroshi Kawai said:
http://www.polygon.com/a/final-fantasy-7
...one of my responsibilities ...was to write performance applications that compared how well the 64 fared against the prototype [PlayStation]. And we'd be running parallel comparisons between the [PlayStation] where you'd have a bunch of 2D sprites bouncing off the screen and see how many polygons you could get within a 60th of a second. And even without any kind of texturing or any kind of lighting, it was less than 50% of what you would be able to get out of the [PlayStation]. Of course, the drawback of the [PlayStation] is it didn't really have a
z-buffer, so you'd have these overlapping polygons that you'd have to work around so that you wouldn't get the shimmering [look]. But on the other hand, there was no way you'd be able to get anything close to what FF7 was doing [on PlayStation] on the 64 at that time...
There was actually this one trip that [Nintendo] organized for me, [main programmer Ken] Narita-san, a few other lead devs who were working on the battle portion for the Final Fantasy 6 [Siggraph] demo at that point... I think Nintendo had been getting signals from Square saying, you know, "Your hardware isn't up to snuff. Not only in terms of raw 3D performance, but in terms of storage." And they said, "We're gonna fabricate this brand new chip," which was supposed to have a bunch of hardware improvements to get a little bit more performance. Which, my suspicion is they probably just repeated that verbatim from SGI [Silicon Graphics, Inc], and
I think there was, in general, a disconnect between SGI and Nintendo in terms of what they were expecting the hardware to do... But Nintendo had certain specific performance metrics that had to be met, but I don't think those were communicated well to SGI... We spent a few days, I remember, optimizing my code, to try to get a few more polygons out, but it didn't really make much of a difference...
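For anyone unfamiliar with the workaround Kawai alludes to: without a Z-buffer, you typically fall back on the painter's algorithm, sorting polygons back-to-front and drawing them in that order. A minimal sketch (hypothetical code, nothing to do with the actual PlayStation SDK):

```python
# Painter's-algorithm sketch (hypothetical illustration, not actual
# PlayStation SDK code): with no Z-buffer, polygons are sorted
# back-to-front by depth and drawn in that order.

def painters_sort(polys):
    """Sort polygons back-to-front by average vertex depth (farthest first)."""
    def avg_depth(poly):
        return sum(z for (_, _, z) in poly) / len(poly)
    return sorted(polys, key=avg_depth, reverse=True)

tri_near = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0)]
tri_far  = [(0, 0, 5.0), (1, 0, 5.0), (0, 1, 5.0)]
print(painters_sort([tri_near, tri_far]) == [tri_far, tri_near])  # True
```

The sort key is only an approximation of visibility, which is exactly where the "shimmering" Kawai mentions comes from: intersecting or cyclically overlapping polygons have no single correct draw order, so they flicker as the sort order flips between frames.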
It would certainly be interesting to discuss what was responsible for the difficulties that Kawai/Square were having: perhaps it was the insufficient information/documentation on how to implement custom microcode, perhaps it was the hardware bottlenecks that were never properly addressed by Silicon Graphics, or perhaps (as was suggested by Kawai himself) it was the lack of clear communication between Nintendo's hardware team and Silicon Graphics.
But I do think it's fair to say that there were some shortcomings in the design/implementation of both the N64 hardware and its larger development ‘environment', especially in the early days, which eventually resulted in problems for Nintendo (some of them quite serious). As you and jett discussed (
here and
here), some competent devs like Psygnosis seem to have had difficulties/limitations similar to the ones discussed by Kawai, even as late as 1998 (when Wipeout 64 was released).
The N64 had a weird architecture full of bottlenecks that made it a pain to get good performance from. The near-complete lack of sound hardware, for example, required developers to use precious CPU time and RAM to perform software sound synthesis and mixing. Nintendo [itself] was really soured by the experience, which made them work hard on making the GC architecture as friendly as possible.
Rob Povey said:
https://forum.beyond3d.com/threads/...datory-functions-of-the-hardware-right.53745/
...The main memory [of the N64] had hideous latency, it was organized in 2 banks the idea being that Z would go in one bank and color in the other, in practice that didn't buy you very much, and with Z on fill was very limited. The CPU's small direct mapped cache couple[d] with the poor memory latency was an absolute performance killer. Which is probably why Nintendo went the way they did with [the GameCube]...
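On the software sound point above: mixing even two PCM channels means a per-sample add and clamp on the CPU, for every sample, every frame. A minimal sketch of what that inner loop looks like (hypothetical illustration, not any actual N64 audio library API):

```python
# Minimal software PCM mixing sketch, of the kind the N64's CPU had to
# do in the absence of dedicated sound hardware (hypothetical code,
# not an actual N64 library API): sum signed 16-bit samples, clamping
# to the valid range instead of letting them wrap around.

def mix16(a: list[int], b: list[int]) -> list[int]:
    """Mix two signed 16-bit PCM channels sample-by-sample, with saturation."""
    out = []
    for x, y in zip(a, b):
        s = x + y
        s = max(-32768, min(32767, s))  # saturate instead of wrapping
        out.append(s)
    return out

print(mix16([1000, 30000], [500, 30000]))  # [1500, 32767]
```

At tens of kilohertz with multiple voices, those per-sample operations add up quickly on a CPU that is also fighting the graphics hardware for memory bandwidth.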
I often wonder what the N64 would have been capable of if they addressed the bottlenecks before release and it had CD...
...Nintendo's brand awareness was through the roof at the time; what you're describing would have been a ludicrously powerful system without the costs/size limitations of the N64 we got. Worth noting FFVII probably would have remained an N64 title too. That would have been huge...