This is unlikely for as long as the Xbox existed; at most, I feel it could have been massively popular with Japanese developers.
Multiplatform western developers were all in on standard features and on-the-fly prototyping/changing of code; Pixel Shader support was huge for them, as was being able to render at 720p (even if they didn't use it, they valued having the capability, since they understood future consoles would have it). Being able to develop games primarily on PC was also something they wanted even back then (PCs are cheaper and more powerful than devkits, and if the engine runs on PC you never develop primarily in a "console environment"); it felt like the future to them. This generation was also the start of middleware, with Unreal Engine, RenderWare and the like being available across all consoles. Unreal Engine even had an exclusive Xbox branch (Unreal Engine 2X) with Xbox/DirectX optimizations built in, no doubt because the Xbox was essentially a PC with a PC GPU. And if you count PC game engines making the jump to the original Xbox, the engine/middleware list gets substantially bigger with all the WRPGs and the like. This bridge happened because it was doable without increasing costs massively.
Gamecube was friendly enough, but it was its own thing: dev tools didn't get in the way (compared to N64 and PS2), but Nintendo didn't build that bridge and kept a lot of specific tools for their own engine implementations (like the Maya plugin they used for Zelda/Mario games). It was easier to treat GC as a PS2 without bottlenecks, since you had to develop with PS2 in mind anyway, rather than treat it like an Xbox. One path was easy sailing, the other wasn't. And sure enough, some GC exclusives didn't leave enough leeway to brute-force them onto Xbox without reworking them, but that is true even for PS2 games that relied on the vector units' fillrate: the MGS2 Xbox port was crap due to it; hell, the Zone of the Enders 2 ports on X360/PS3 years later were crap due to it (the PS3 version was revised though, probably through similar SPE use, since the fix didn't make it to X360).
I remember Shinji Mikami saying that on Resident Evil 4 they used the TEV pipeline extensively (TEV being the "semi-Pixel Shader" equivalent on Gamecube), but it was a chore, as they had to make a new build to test every little change, and they would have loved to make changes on-the-fly instead. This preview feature was eventually implemented in the Wii devkit, dubbed a TEV-pipeline emulator, but it came late in the generation (2009-2010 or so).
This was the Gamecube method:
The cartridge was where developers had to put each build of their code in order to test it.
That, plus the fact that studios didn't share much knowledge between them (nor did Nintendo, who were reluctant to give "pre-written shaders" to third parties), was one of the reasons the TEV pipeline was so heavily underutilized. The other part of the equation on the Wii, despite Nintendo providing more support documentation and help on that front, was that developers didn't see it as useful: it wasn't the future, it was a dead end for them.
If the Wii had had Pixel Shader-compliant capabilities (and a feature set more in line with the Xbox), western developer acceptance and results would have been much, much better. (Not to say good, but better.)
This on-the-fly prototyping was already possible on Xbox, since it was Pixel Shader compliant, the same as PC, as was the rest of its architecture for all intents and purposes. Devs also saw Pixel Shader work as relevant experience going forward, and I have to say, it was: a decade later Nintendo was hiring people with "shader experience" (sometimes under a moniker like "HD development experience") when they launched the Wii U.
Someone who worked on Xbox games of the era would have a better notion of conventional shader inner workings than anyone who only worked on PS2, Gamecube and Wii.