
Wii U CPU |Espresso| Die Photo - Courtesy of Chipworks

They've already fundamentally changed UE4 to fit within the console spec. An engine is an engine. The game you run on the engine is what matters.

No, they haven't, depending on your definition of that word. It's completely scalable within that spec; that's not fundamentally changing anything. I remember reading that UE4's baseline spec was about a 1 TFLOP GPU. I'm assuming UE4 can scale all the way from a 1 TFLOP GPU to 5 TFLOPs+. The Wii U GPU doesn't even come close to 1 TFLOP, but I bet memory is the bigger hindrance, even with 3 GB in the debug environment.

I would also speculate that the Wii U's bizarre GPU, with its possible secret-sauce fixed-function hardware, might not jibe well with an engine like UE4. The Wii U may really need its own custom engines to get the most out of it. Nintendo didn't really design the system with 3rd parties in mind, IMO.
 

KageMaru

Member
They've already fundamentally changed UE4 to fit within the console spec. An engine is an engine. The game you run on the engine is what matters.

I really don't think you can compare the changes made to fit UE4 on the PS4/Durango to the changes that would be necessary to fit the engine on the Wii-U.

PS4/Durango can at least get close to the original look of UE4, do you think the same would be said for the engine on Wii-U?
 

StevieP

Banned
I really don't think you can compare the changes made to fit UE4 on the PS4/Durango to the changes that would be necessary to fit the engine on the Wii-U.

PS4/Durango can at least get close to the original look of UE4, do you think the same would be said for the engine on Wii-U?

I wasn't aware that engines had "looks" :)

I always thought it was a set of tools/code/scripts/etc. that allowed a game/demo/what have you to run on a piece of hardware that's supported by said engine, whether that game is a Rubik's cube in a flat room or Gears of War 6, 1080p/60 3D edition.

The "1tf is where things get interesting" quote reflects that, and its scalability - as most engines are (including CE3 and FB3).

Now, if you're asking whether the Wii U is anywhere near what Epic wants to showcase and third parties would use the engine for? "Hahaha, no"

Doesn't this discussion belong in the other thread? Or is that one still unbearably bad and full of useless trolling?
 

KageMaru

Member
I wasn't aware that engines had "looks" :)

I always thought it was a set of tools/code/scripts/etc. that allowed a game/demo/what have you to run on a piece of hardware that's supported by said engine, whether that game is a Rubik's cube in a flat room or Gears of War 6, 1080p/60 3D edition.

The "1tf is where things get interesting" quote reflects that, and its scalability.
Now, if you're asking whether the Wii U is anywhere near what Epic wants to showcase and third parties would use the engine for? "Hahaha, no"

We're both well aware that an engine is composed of more than just the renderer, but from what I understand, some of the changes to the UE4 tools (which were said to be similar in some ways to UE3's, IIRC) are a direct result of what the renderer is and is not capable of now.

Basically, why bother porting the engine to the system if the desired results won't be produced? Much better to just use UE3, since we don't know whether one of the aspects of UE4 that would need to be cut is a fundamental part of the renderer or of how the engine operates, IMO.

I'm rushing on my way out from work now, so I probably explained this rather poorly, but hopefully you get the idea. =p
 
We're both well aware that an engine is composed of more than just the renderer, but from what I understand, some of the changes to the UE4 tools (which were said to be similar in some ways to UE3's, IIRC) are a direct result of what the renderer is and is not capable of now.

Basically, why bother porting the engine to the system if the desired results won't be produced? Much better to just use UE3, since we don't know whether one of the aspects of UE4 that would need to be cut is a fundamental part of the renderer or of how the engine operates, IMO.

I'm rushing on my way out from work now, so I probably explained this rather poorly, but hopefully you get the idea. =p

Yes. So basically, if a developer goes that route, you're going to have a completely different team developing the game using UE3, and at that point it's not really a port. It will be similar to what devs did with the Wii at the beginning of this console gen. I really don't see many games that are using UE4 making their way to Wii U. Even developing the game separately using UE3 is going to be pretty costly, though less so than recoding all of UE4. Add in the fact that the Wii U isn't selling that great, and the ROI just isn't great enough (at least for the majority of publishers; some may be willing to take the chance).
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Yes. So basically, if a developer goes that route, you're going to have a completely different team developing the game using UE3, and at that point it's not really a port. It will be similar to what devs did with the Wii at the beginning of this console gen. I really don't see many games that are using UE4 making their way to Wii U. Even developing the game separately using UE3 is going to be pretty costly, though less so than recoding all of UE4. Add in the fact that the Wii U isn't selling that great, and the ROI just isn't great enough (at least for the majority of publishers; some may be willing to take the chance).
How many engines have you recoded? I'm not picking on you, just being curious.
 

BaBaRaRa

Member
I don't mean to keep this thread moving further away from CPU's, but how often will a team 'recode' parts of an engine for platforms that are already supported?

It would be good to be able to frame the whole 'port to WiiU argument' by knowing how many internal 'forks' of engines are already out there. I don't know the licensing around UE, and what's allowed, but I presume it must be fairly permissive if the option exists for someone to port it to new hardware.

For all I know everyone uses stock UE; but I'd be interested to know either way. If it's fairly common practice to have "the engine guy" who sits in the corner (ignored because everyone's too busy praising the guy who animated catwoman's ass), then it really may not be that big a deal to have "the other engine guy" who makes the drastically cut down assets work on a WiiU at a passable framerate.
 

joesiv

Member
I don't mean to keep this thread moving further away from CPU's, but how often will a team 'recode' parts of an engine for platforms that are already supported?

It would be good to be able to frame the whole 'port to WiiU argument' by knowing how many internal 'forks' of engines are already out there. I don't know the licensing around UE, and what's allowed, but I presume it must be fairly permissive if the option exists for someone to port it to new hardware.
I've never worked on a project that used Unreal, just so you know, but my understanding is that developers get access to the source code of the Unreal Engine (the $99 version, though). Developers are able to tweak what they want, but realize that once you tweak it, you lose a lot of the support that comes with the main branch from Epic.

For all I know everyone uses stock UE; but I'd be interested to know either way. If it's fairly common practice to have "the engine guy" who sits in the corner (ignored because everyone's too busy praising the guy who animated catwoman's ass), then it really may not be that big a deal to have "the other engine guy" who makes the drastically cut down assets work on a WiiU at a passable framerate.
Most people would likely run stock Unreal, requesting fixes for issues and filing support requests as needed. If a platform wasn't supported, obviously they'd have to add that themselves if Epic wasn't going to do it. Though it's possible there is a feedback mechanism for such work that Epic can collect and eventually roll into its official support.

From my experience working on other projects that used custom engines: generally yes, there would be dedicated programmers for specific platforms, working on the problem areas and ensuring compatibility and performance, though this was usually outside of the main development, which happened on a "lead platform". In some cases Xbox 360 was the lead platform, and the PS3 was the one that had the platform-specific programmer(s). Then, sometimes you'd have a "guru" who really dug into the deep engine internals, fine-tuning machine code for specific platforms to get even better performance.

To bring it back to CPUs (somewhat), I recall a conversation I had with one such "guru" at Radical Entertainment who knew the GameCube's architecture inside and out. At the time we were developing Prototype for PS3/Xbox 360, and he said that Prototype could compile and run on the Wii (minus audio, as at the time no effort had been made specifically for Wii support in the Titanium engine), though it would be at a very low framerate. This was very early in development though, and much went into the game after that which might not have compiled without a lot of effort.
 

USC-fan

Banned
KB smoker: it's not hardware related. It's business. Return on investment.
Edit: I guess in that sense you can say it is hardware related, as in "we don't see the need to directly support UE4 on that piece of hardware because we don't see much or any game licensees purchasing it for that reason, but we will certainly adapt it downward for the high end consoles that have a lower spec than we wanted"

I think maybe lherre should pop by the thread.
Back to the CPU discussion please.
It's based on hardware only. Epic ported UE3 to Wii U so fast they had launch games.

Show me anything that backs up your theory. I've seen no statement from Epic. It's really silly to think Epic would waste time porting a dead engine and not the new one if it could.

It's just like DICE said: you have to set the bar and not support hardware below that.

Business had nothing to do with it. UE3 was there day 1...
 
How many engines have you recoded? I'm not picking on you, just being curious.

NONE, but it's obvious that it takes a lot of time. The whole point of an engine like UE4 is to reduce dev time, which reduces costs significantly. I mean, look at all these new talks at GDC regarding engines like FOX, and the ease of programming for PS4. It's all about ease of use and investing less time while still being able to get a high-quality product out the door. Time = money.

The only personal experience I have regarding this is as a tester. From that point of view you really do get a good glimpse of how the overall quality of the product is completely affected by time and money. From the production side, they have every hour and every tester calculated down to the last dollar. Usually games ship with around 2,000 waived bugs; GT games have about 5k+, for example. It requires too much coding time to fix all of those, so you have to do it by priority.

One of the major ways you improve this whole process is by being more efficient. It's easy to see why a good place to start would be the tools and engine. Recoding the whole UE4 engine to work on Wii U isn't an efficient use of time and money.

There are really 2 options when developing a UE4 game on PS4/720 and considering a Wii U version.

1. Develop the game on UE3 (I would like to know whether assets could be carried over if they did this, or whether they'd need to recreate everything; I'm thinking they could).

2. Don't develop it on Wii U at all.

UE4 isn't even an option IMO, unless Epic decides to natively support the platform in UE4, which supposedly isn't going to happen.

I don't mean to keep this thread moving further away from CPU's, but how often will a team 'recode' parts of an engine for platforms that are already supported?

I don't think modifying, or building on top of, an engine is the same thing as recoding the fundamental base code of the engine, which is what would be necessary to get it to work on the Wii U, plus removing every software feature/bell and whistle the Wii U is incapable of running. I mean, games do the former all the time; UE3 in BioShock Infinite is heavily modified, for example.

My whole point is really: what is the point of porting a UE4 game to Wii U? There isn't one. It's pointless; you're losing out on all the advantages of using UE4, losing all the bells and whistles, and it would be a huge waste of money. Just build it with UE3, which is already supported, which means you'll get your game up and running much quicker. If that ends up sacrificing the vision of the game too much, or ends up costing too much extra money because you're developing your game with TWO different engines/toolsets, then you don't make it at all.
 
NONE, but it's obvious that it takes a lot of time. The whole point of an engine like UE4 is to reduce dev time, which reduces costs significantly. I mean, look at all these new talks at GDC regarding engines like FOX, and the ease of programming for PS4. It's all about ease of use and investing less time while still being able to get a high-quality product out the door. Time = money.

The only personal experience I have regarding this is as a tester. From that point of view you really do get a good glimpse of how the overall quality of the product is completely affected by time and money. From the production side, they have every hour and every tester calculated down to the last dollar. Usually games ship with around 2,000 waived bugs; GT games have about 5k+, for example. It requires too much coding time to fix all of those, so you have to do it by priority.

One of the major ways you improve this whole process is by being more efficient. It's easy to see why a good place to start would be the tools and engine. Recoding the whole UE4 engine to work on Wii U isn't an efficient use of time and money.

There are really 2 options when developing a UE4 game on PS4/720 and considering a Wii U version.

1. Develop the game on UE3 (I would like to know whether assets could be carried over if they did this, or whether they'd need to recreate everything; I'm thinking they could).

2. Don't develop it on Wii U at all.

UE4 isn't even an option IMO, unless Epic decides to natively support the platform in UE4, which supposedly isn't going to happen.



I don't think modifying, or building on top of, an engine is the same thing as recoding the fundamental base code of the engine, which is what would be necessary to get it to work on the Wii U, plus removing every software feature/bell and whistle the Wii U is incapable of running. I mean, games do the former all the time; UE3 in BioShock Infinite is heavily modified, for example.

My whole point is really: what is the point of porting a UE4 game to Wii U? There isn't one. It's pointless; you're losing out on all the advantages of using UE4, losing all the bells and whistles, and it would be a huge waste of money. Just build it with UE3, which is already supported, which means you'll get your game up and running much quicker. If that ends up sacrificing the vision of the game too much, or ends up costing too much extra money because you're developing your game with TWO different engines/toolsets, then you don't make it at all.

Epic is advertising that UE4 has improved tools that will make it easier to develop games in general. Epic is planning to bring UE4 to phones and tablets, so UE4 is already designed with scalability in mind.

Also, something else to consider: according to lherre,

Only for your info, UE3 is no longer developed/updated for consoles since July (it's still in development for pc or phones, but I think it won't be too much at all). The July build is the latest UE3 build for consoles. So the development for wii U (or other current consoles) is the one in this build, it won't be improved. At least officially by Epic, you always can try to update/integrate the changes/improvements by yourself.

If that is true, Epic is going to seriously push people to use UE4 instead of sticking with UE3, even if you are not planning to push its graphical capabilities. Lherre also states that there is no official UE3 support for the other next-gen consoles, so cross-generational games would benefit if they could stick with UE4 for all consoles.

This topic wouldn't even be a discussion if Epic were more honest about what they are doing: several sources have hinted that the Wii U and current-gen consoles are listed for support in the UE4 documents.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
NONE, but it's obvious that it takes a lot of time.
See, the main reason I asked is your rather definitive statement that re-doing the game in UE3 would be cheaper. That's what actually caught my curiosity. So let's see..

The only personal experience I have regarding this is as a tester. From that point of view you really do get a good glimpse of how the overall quality of the product is completely affected by time and money. From the production side, they have every hour and every tester calculated down to the last dollar. Usually games ship with around 2,000 waived bugs; GT games have about 5k+, for example. It requires too much coding time to fix all of those, so you have to do it by priority.

One of the major ways you improve this whole process is by being more efficient. It's easy to see why a good place to start would be the tools and engine. Recoding the whole UE4 engine to work on Wii U isn't an efficient use of time and money.

There are really 2 options when developing a UE4 game on PS4/720 and considering a Wii U version.

1. Develop the game on UE3 (I would like to know whether assets could be carried over if they did this, or whether they'd need to recreate everything; I'm thinking they could).

2. Don't develop it on Wii U at all.

UE4 isn't even an option IMO, unless Epic decides to natively support the platform in UE4, which supposedly isn't going to happen.
Well, the only personal experience I have with this is doing engine R&D and maintenance over the course of more than 10 years. You may choose to ignore what I'm about to say, but I'll say it nevertheless:

The situations where it would be cheaper to redo a game in another engine/a major engine revision are very rare, not to say exceptional (apparently 'a major game engine revision' might need further clarification, but if Epic are to be believed, UE4 will offer more than a few content pipeline advancements). Game engine teams are normally a handful of people. Game development teams sit at an order of magnitude greater numbers. Engines get ported across platforms by even fewer developers - there's normally a 'platform X' sub-team within (or outside) the engine team.

Last but not least, engine licensees (Epic licensees in particular) do engine ports and major modifications at will. 'Unless Epic decides to support a platform natively' is really a marginal factor here. It's more a matter of 'do we already have a license for this UE or not'.

I don't think modifying, or building on top of, an engine is the same thing as recoding the fundamental base code of the engine, which is what would be necessary to get it to work on the Wii U, plus removing every software feature/bell and whistle the Wii U is incapable of running. I mean, games do the former all the time; UE3 in BioShock Infinite is heavily modified, for example.
I don't think we should delve much into the 'recoding the base fundamental code' (whatever you might think that involves) topic here, given neither of us knows what that would imply in this particular case. Apparently the lighting system of UE4 was 'non-fundamental enough' for Epic to get rid of it at the drop of a hat for a PS4 demo event.

My whole point is really: what is the point of porting a UE4 game to Wii U? There isn't one. It's pointless; you're losing out on all the advantages of using UE4, losing all the bells and whistles, and it would be a huge waste of money. Just build it with UE3, which is already supported, which means you'll get your game up and running much quicker. If that ends up sacrificing the vision of the game too much, or ends up costing too much extra money because you're developing your game with TWO different engines/toolsets, then you don't make it at all.
And my whole point is that your premise, that porting a game to a different engine is cheaper than porting an engine to a new platform, is rather flawed.
 

krizzx

Junior Member
You know, I've been thinking about the comment I made earlier about the clock speed.

It was rumored initially that the CPU was 3.6 GHz, and technically 1.2 GHz with 3 cycles per second could be viewed as 1.2x3 or 3.6 GHz vs. the 1.6x2 or 3.2 GHz, mathematically? Would that entail that the raw power of the Wii U's CPU is higher than Xenos?

Besides clock speed, cycle rate and being out-of-order, what other features does the Wii U CPU possess that are not present in the Xenos?
 
It was rumored initially that the CPU was 3.6 GHz, and technically 1.2 GHz with 3 cycles per second could be viewed as 1.2x3 or 3.6 GHz vs. the 1.6x2 or 3.2 GHz, mathematically?

No. It'd be wrong and luckily nobody uses such a notation.
Btw. the CPU completes up to two operations per cycle.

Would that entail that the raw power of the Wii U's CPU is higher than Xenos?

No, due to its 128-bit SIMD units and higher clock rate, Xenon's peak performance is much larger (~96 GFLOPS for Xenon vs. ~15 GFLOPS for Espresso).
 

krizzx

Junior Member
No. It'd be wrong and luckily nobody uses such a notation.
Btw. the CPU completes up to two operations per cycle.



No, due to its 128-bit SIMD units and higher clock rate, Xenon's peak performance is much larger (~96 GFLOPS for Xenon vs. ~15 GFLOPS for Espresso).

15 GFLOPs? How did you determine that?

That doesn't make any form of sense. Even the guy who actually uncovered the clock speed said it wasn't that much different than the 360 CPU in performance.

It's not that it simply completes 2. It takes in 3, then retires 2. Also, that is only info for the single-core PPC750. That is not the documentation for Espresso, which is only in the same family of chips. It is no more the same than a Pentium 3 is the same as a Pentium 1, or a Pentium D is a Pentium 3. If we had all the documentation on Espresso to make a hard claim like that, then there would be no point to this thread.

You are denying Y outright and saying one is better because of X without saying how X makes it better. That is a really shallow response and doesn't answer any of my questions. Do you have anything to back up these claims?


I think there is a lot more to it than that. There is no question that Espresso, watt for watt, does more than Xenos or Cell do. The question I want answered is to what extent. Exactly how capable is it, comparatively, in real-world performance?
 
15 GFLOPs? How did you determine that?

Quite simple: 1.24 GHz * 3 (cores) * 2 (paired singles) * 2 (FMA)

That doesn't make any form of sense. Even the guy who actually uncovered the clock speed said it wasn't that much different than the 360 CPU in performance.

You are saying one is better because of X without saying how X makes it better. That is a really shallow response.

You asked for "raw power" and I provided the information.

Of course these numbers are highly theoretical and in most cases don't reflect actual average performance. Whether Xenon or Espresso is faster depends on the application.
My opinion: In usual gaming performance they are quite similar, that's also what most multiplatform titles suggest so far.
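
To put rough numbers on the arithmetic above, here is a quick Python sketch. These are purely theoretical peaks, and the Xenon breakdown is my own assumption about how the ~96 GFLOPS figure is usually reached (counting the 4-wide VMX128 unit plus the scalar FPU, both doing fused multiply-adds):

# back-of-envelope peak single-precision FLOPS (theoretical, not sustained)
espresso_peak = 1.24e9 * 3 * 2 * 2        # clock * cores * paired singles * multiply-add
xenon_peak = 3.2e9 * 3 * (4 * 2 + 2)      # clock * cores * (4-wide VMX128 FMA + scalar FPU FMA), assumed breakdown
print(round(espresso_peak / 1e9, 1), round(xenon_peak / 1e9, 1))   # ~14.9 vs ~96.0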
 

krizzx

Junior Member
Quite simple: 1.24 GHz * 3 (cores) * 2 (paired singles) * 2 (FMA)



You asked for "raw power" and I provided the information.

Of course these numbers are highly theoretical and in most cases don't reflect actual average performance. Whether Xenon or Espresso is faster depends on the application.
My opinion: In usual gaming performance they are quite similar, that's also what most multiplatform titles suggest so far.

Once again, I doubt it's that simple. The actual performance it has demonstrated in games is higher than that would allow. Espresso is not simply 3 Broadways sandwiched on a die with a slightly higher clock. I'm pretty sure that was ruled out as nearly impossible earlier.

There is much more about the Espresso cores that is different from the Broadway core.

espresso_annotated.jpg
broadway_annotated.jpg
 

Mithos

Member
Also, something else to consider: according to lherre,
Only for your info, UE3 is no longer developed/updated for consoles since July (it's still in development for pc or phones, but I think it won't be too much at all). The July build is the latest UE3 build for consoles. So the development for wii U (or other current consoles) is the one in this build, it won't be improved. At least officially by Epic, you always can try to update/integrate the changes/improvements by yourself.

If that is true, Epic is going to seriously push people to use UE4 instead of sticking with UE3, even if you are not planning to push its graphical capabilities. Lherre also states that there is no official UE3 support for the other next-gen consoles, so cross-generational games would benefit if they could stick with UE4 for all consoles.


I think this will be quite bad for games using UE3; any improvements made now will obviously be unique to the game being developed.
If Epic doesn't intend to update/optimize the UE3 engine for Wii U, maybe they should open it up for every developer to commit fixes/updates/optimizations into the official build.
 

krizzx

Junior Member
Just an fyi, but the definition of Hz is cycles per second.

I know. That was a typo. I meant instructions.



How long are people going to be arguing about the Unreal Engine in here? It will go nowhere, because most people are only arguing from a biased slant and there is nothing concrete to give either side headway.

I'd honestly think that the announcement that Epic won't bring it over would make it one of the most "irrelevant" things to the Wii U. Why people insist on making an issue out of it is beyond me.

How much total cache does Espresso have compared to Broadway?
 
Once again, I doubt it's that simple. The actual performance it has demonstrated in games is higher than that would allow.

Believe what you will. I think it's a very reasonable assumption that, from a performance standpoint, Espresso is what you'd expect from a 3 core Broadway with larger caches.

edit:
How much total cache does Espresso have compared to Broadway?

3 MB (configured as 2 MB for one core and 512 KB for each of the other two) vs 256 KB
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
You know, I've been thinking about the comment I made earlier about the clock speed.

It was rumored initially that the CPU was 3.6 GHz, and technically 1.2 GHz with 3 cycles per second could be viewed as 1.2x3 or 3.6 GHz vs. the 1.6x2 or 3.2 GHz, mathematically? Would that entail that the raw power of the Wii U's CPU is higher than Xenos?
No.

Besides clock speed, cycle rate and being out-of-order, what other features does the Wii U CPU possess that are not present in the Xenos?
Perhaps I'm misinterpreting you here, but you're not implying that Xenos does not have a clock speed, right? And 'cycle rate' is what, exactly?

No. It'd be wrong and luckily nobody uses such a notation.
Btw. the CPU completes up to two operations per cycle.
Two ops plus a branch resolve (usually written as 2+1).
 

The_Lump

Banned
It's based on hardware only. Epic ported UE3 to Wii U so fast they had launch games.

Show me anything that backs up your theory. I've seen no statement from Epic. It's really silly to think Epic would waste time porting a dead engine and not the new one if it could.

It's just like DICE said: you have to set the bar and not support hardware below that.

Business had nothing to do with it. UE3 was there day 1...

Oh, right. *rolls eyes*

Don't be ridiculous. Everything a company does is business-related. This isn't a damn playground popularity contest; the industry exists outside of gaming forums, dude.

UE3 was there day 1 because UE3 was a well established, widely used engine leading up to day one.
 
I think this will be quite bad for games using UE3; any improvements made now will obviously be unique to the game being developed.
If Epic doesn't intend to update/optimize the UE3 engine for Wii U, maybe they should open it up for every developer to commit fixes/updates/optimizations into the official build.
Not that this is really the thread for it, but when I read Mark Rein's statement regarding UE4's lack of direct support for the Wii U, I thought it made sense, as I don't believe the Wii U's philosophy is really in line with what Epic is trying to push, which is bleeding-edge graphics technology.

But hearing this new information, which is basically that UE3 is a dead platform, it's hard to take Epic's stance on this seriously. It would be like MS saying that your new low end PC isn't suitable for Win7/8 so you should install Windows XP on it instead with the full knowledge that XP isn't going to be supported/updated going forward. In other words, if you were thinking about developing for the Wii U then don't bother.

If the Wii U's hopes for 3rd party development are going to rest on publishers creating Wii U specific engines and separate teams to work only on that engine then this whole thing was over before it started.
 

joesiv

Member
But hearing this new information, which is basically that UE3 is a dead platform, it's hard to take Epic's stance on this seriously. It would be like MS saying that your new low end PC isn't suitable for Win7/8 so you should install Windows XP on it instead with the full knowledge that XP isn't going to be supported/updated going forward. In other words, if you were thinking about developing for the Wii U then don't bother.
The only reason you would want MS to support your operating system is security vulnerabilities and the patches that fix them. I bet a lot of people would happily still be running Windows XP (and 2000 before that) if MS was still offering security patches for it.

A game engine, on the other hand, isn't so critical on security issues, and besides, as far as I know we don't know whether it's "support" that will end for UE3 or just future development; there is a big difference. Support would be critical for future games, since if there are existing major bugs they'd need that support infrastructure, but feature requests, on the other hand, aren't needed.

Taking your Windows analogy, Microsoft stopped developing Windows 2000/XP/etc. far before they stopped supporting (patching) them.
 
You know, I've been thinking about the comment I made earlier about the clock speed.

It was rumored initially that the CPU was 3.6 GHz, and technically 1.2 GHz with 3 cycles per second could be viewed as 1.2x3 or 3.6 GHz vs. the 1.6x2 or 3.2 GHz, mathematically? Would that entail that the raw power of the Wii U's CPU is higher than Xenos?

Besides clock speed, cycle rate and being out-of-order, what other features does the Wii U CPU possess that are not present in the Xenos?

It's Xenon, Xenos is the GPU.

You seem to be under the misapprehension that there is something remarkable about the WiiU CPU's ability to issue more than one instruction per cycle. Are you not aware that the 360 CPU is dual issue? By your logic it is like each CPU core in the 360 is a 6.4GHz processor because it can technically do 2 instructions per cycle! To really blow your mind, the Jaguar cores in the PS4 and Durango are each capable of like 4 or 5 instructions per cycle.

In reality, it isn't just about how many instructions you can issue in a cycle; it's about how efficiently the CPU can actually execute those instructions, AND about how much work each of those instructions actually does (for example, the 128-bit SIMD capabilities of Xenon and Jaguar compared to the 32-bit paired singles in Espresso). Then you start thinking about actual clock speed, number of cores, memory bottlenecks, etc. Espresso may be more efficient with certain kinds of code than Xenon, but it still gets destroyed in anything that requires heavy vector math.
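
To make that last point concrete, here's a hedged back-of-envelope Python sketch of how long a fixed chunk of vector math would take per core at theoretical peak, assuming Espresso's paired singles are 2-wide and Xenon's VMX128 is 4-wide, both with multiply-add, and ignoring memory bottlenecks entirely:

# time for one core to chew through 1 billion 4-element single-precision multiply-adds at peak
work_flops = 1e9 * 4 * 2                  # vectors * lanes * (multiply + add)
espresso_core_peak = 1.24e9 * 2 * 2       # clock * 2-wide paired singles * FMA, ~5 GFLOPS
xenon_core_peak = 3.2e9 * 4 * 2           # clock * 4-wide VMX128 * FMA, ~25.6 GFLOPS
print(round(work_flops / espresso_core_peak, 2))   # ~1.61 s
print(round(work_flops / xenon_core_peak, 2))      # ~0.31 s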
 

Log4Girlz

Member
KB smoker: it's not hardware related. It's business. Return on investment.
Edit: I guess in that sense you can say it is hardware related, as in "we don't see the need to directly support UE4 on that piece of hardware because we don't see much or any game licensees purchasing it for that reason, but we will certainly adapt it downward for the high end consoles that have a lower spec than we wanted"

Yes, the hardware and business proposition are entwined. They still left it up to whoever purchases the engine to gut it to run on the Wii U.
 

krizzx

Junior Member
It's Xenon, Xenos is the GPU.

You seem to be under the misapprehension that there is something remarkable about the WiiU CPU's ability to issue more than one instruction per cycle. Are you not aware that the 360 CPU is dual issue? By your logic it is like each CPU core in the 360 is a 6.4GHz processor because it can technically do 2 instructions per cycle! To really blow your mind, the Jaguar cores in the PS4 and Durango are each capable of like 4 or 5 instructions per cycle.

In reality, it isn't just about how many instructions you can issue in a cycle; it's about how efficiently the CPU can actually execute those instructions, AND about how much work each of those instructions actually does (for example, the 128-bit SIMD capabilities of Xenon and Jaguar compared to the 32-bit paired singles in Espresso). Then you start thinking about actual clock speed, number of cores, memory bottlenecks, etc. Espresso may be more efficient with certain kinds of code than Xenon, but it still gets destroyed in anything that requires heavy vector math.


No, I am under no such misapprehension. I was asking for an explanation of what the differences are in real-world, practical terms. That was the only purpose of the comment. No one was answering or making any posts in the thread at all, so I tried to do my own math on the subject given the details I have seen.

My only interest is to learn exactly how much of an upgrade this CPU is from the Wii's, and how it performs compared to the last-gen HD twins in real-world capability/results, as opposed to theoretical numbers. What I'm getting instead is the usual console-war defense mechanism of people trying to defend their preferred hardware by using incomplete, out-of-context comparisons and ignoring unfavorable details. I have no fantasies of Espresso being another Cell, though I am curious as to some of the differences between it and the Cell.

I am not asking this out of fanboyism. I'm asking because I want to learn exactly what this CPU can do. I do not care what people feel will "destroy" this or "blow your mind" about that. Adjective appraisals like that mean nothing to me. I am primarily a PC gamer. I have a PC that will dwarf any of these consoles. I am not impressed by them. I assure you they will "not" blow my mind. What I've seen technically has disappointed me on all of them, especially the PS4. I only buy consoles and games that offer me things that I cannot get on the PC. So far, only the Wii U has provided that this gen, and there have been few examples of it.

On a technical level, it interests me more to see what people can squeeze out of something weak, like the old Voodoo cards running Doom 3, than to see something boasting huge specs show a bunch of easily done effects.

Let me return to the topic though. I have digressed too far.



Is it correct to say that an SPE is 1/3 of a core? I remember reading a diagram long ago when studying the Cell that listed the components of a CPU core, and the unit that a Cell SPE contains was 1 of the 3 main components of a full core (the arithmetic logic unit, I believe?).

Also, I remember people stating earlier that some of the cores in the 360 CPU and some of its RAM were tied up processing audio and running the OS/security. Exactly how much of each?
 
You can protest all you like, but you keep popping into WiiU tech threads spouting barely understood technical terms in a vain attempt to paint it as having some kind of advantage.

As for the SPEs, talking about one as "1/3rd of a core" is pretty meaningless. Each is a fully independent CPU core with a purposefully limited instruction set and a small but very fast space of working memory. Since you aren't supposed to use them like a conventional CPU core, it isn't particularly helpful to try and make those kinds of comparisons. They are capable of both floating-point and integer calculations, but they are strongly suited to 128-bit vector math. Individually, each SPE in the PS3 is capable of about 26 GFLOPS. That's 11 more GFLOPS for one SPE than all three Espresso cores combined.

And yes, in the 360 (and PS3) there is no dedicated sound hardware. Developers are free to devote as much or as little of the system's resources to audio as they want.
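
For what it's worth, the ~26 GFLOPS per SPE figure falls out of the same kind of peak arithmetic, assuming a 4-wide single-precision multiply-add every cycle at 3.2 GHz (a quick Python sketch, theoretical peaks only):

spe_peak = 3.2e9 * 4 * 2                  # clock * 4-wide SIMD * multiply-add = 25.6 GFLOPS
espresso_all = 1.24e9 * 3 * 2 * 2         # all three Espresso cores combined, ~14.9 GFLOPS
print(round((spe_peak - espresso_all) / 1e9))   # ~11, i.e. the gap mentioned above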
 

StevieP

Banned
And yes, in the 360 (and PS3) there is no dedicated sound hardware. Developers are free to devote as much or as little of the system's resources to audio as they want.

I feel you're understating this point. There are many cases where audio took one or 2 threads away (which in the case of the 360, isn't trivial).
 

krizzx

Junior Member
You can protest all you like, but you keep popping into WiiU tech threads spouting barely understood technical terms in a vain attempt to paint it as having some kind of advantage.

As for the SPEs, talking about one as "1/3rd of a core" is pretty meaningless. Each is a fully independent CPU core with a purposefully limited instruction set and a small but very fast space of working memory. Since you aren't supposed to use them like a conventional CPU core, it isn't particularly helpful to try and make those kinds of comparisons. They are capable of both floating-point and integer calculations, but they are strongly suited to 128-bit vector math. Individually, each SPE in the PS3 is capable of about 26 GFLOPS. That's 11 more GFLOPS for one SPE than all three Espresso cores combined.

And yes, in the 360 (and PS3) there is no dedicated sound hardware. Developers are free to devote as much or as little of the system's resources to audio as they want.

I'm not spouting any technical terms. I'm "asking questions" about technical features. Apparently, you and a few others seem to take heavy offense at this for some reason, given the way you get so defensive every time I ask a question that compares the console hardware or asks about the technical limits of the PS3/360 hardware. All I want are answers.

Apparently, anyone who doesn't express extreme hatred for Nintendo and put them down constantly is considered a Nintendo fanboy on the internet these days. I have no interest in your console war. As I said, I am a PC gamer. You are the ones trying to make it into something more.

There are only 2 Wii U tech threads, and of course I "keep popping up" in them. I'm interested in learning about them. Should I pop up in a Vita software thread, a middleware game engine thread, or something else I care nothing about instead? Would you like that more? Could you please stop with the personal attacks and assumptions? I am not here for a console penis-showing contest. I just want to know about the capabilities of the hardware components.
 

wsippel

Banned
That's strange, have they hired more people for NERD? I figured they'd continue on with video/audio compression and so forth, given the background of Mobiclip.
Yes, they've hired several engine programmers in 2012, including the former head of engine R&D at Eugen Systems.
 

krizzx

Junior Member
If all you were interested in was honest answers you wouldn't keep disputing factual answers as console war bullshit.

What factual numbers have I disputed? I don't recall ever disputing anything posted from official documentation or from multiple professional sources, and the only way I would do so is if there were many other sources citing inaccuracy of some sort or misinterpretation of numbers. I may have asked for elaboration on something that I did not understand or that did not make sense, but I will not dispute fact. Apparently, you have an issue with people questioning or fact-checking you.

Your opinions and beliefs, however, are not "factual" answers. Fallacies abound. An honest "answer" =/= a "factual answer". They are two different things. You may believe what you are saying, but I have no reason to unless you are a "professional" with credentials or you are fully citing one.

Also, I said I was only asking questions; I didn't say what they were limited to, because I am asking about numerous things, primarily analysis of real-world performance as opposed to raw numbers that often tell a different tale. There are few "factual answers" for that, not with the info that has been derived to date. Only theoretical possibilities can be determined at this point, and I want to know what those are.

I only ask how the performance relates to the 360/PS3 hardware because that is what's out there to compare it to, and it will give a better estimate of performance than throwing darts in the dark. That's also why I posted the Elebits and Boom Blox videos a while back. I want a "real-world performance" analysis. I do not understand the reason for all of this hostility. It's like people take remarks about their console that could potentially be unfavorable as a declaration of war.
 

krizzx

Junior Member
Back on topic.



How was the GFLOPS performance of the Xbox 1 CPU vs. Gekko, and Gekko vs. Broadway? I can't find a GFLOPS listing for the Xbox 1 CPU or the Wii CPU, only 1.9 GFLOPS for Gekko.

Then there is that chart that was posted a while back, either in this thread or the GPU thread, which showed the huge overall performance increase of the 2007 PPC750 vs. the older one, which boasted higher specs in pretty much every department. I'm curious as to whether Espresso could also have been made with much more efficient components that produce performance higher than what the simple at-a-glance increases would dictate.

Also, another thing posted by lostinblue raises questions about the "3 Broadways sandwiched together" theory.

I believe it was off the shelf like that, multiplier is locked so it would mean messing with the FSB otherwise. Notice that kit has the same FSB number for both the 1 and 1.1 GHz configurations.

I don't know about that, but it certainly is.


This architecture also kinda loses its power effectiveness if clocked too high, as illustrated by this PPC750CL table:

The PPC750CL @ 733 MHz is clearly Broadway. If it was simply just 3 Broadways, then the power draw should increase by 3 times as well, to around 12 watts. This demonstrates that it eats much more power as the clock increases, so that combined with the increase from 733 MHz to 1.2 GHz would yield nearly a 25-30 W draw. That is without taking into consideration things like the increased cache on Espresso.

We know that the active load of the Wii U is 45 watts, if I'm not mistaken. The estimated power draw for Espresso is 7 watts. Just clocking a regular single-core PPC750CL to 1 GHz yields a higher power draw than that.

From what I'm seeing, there is no way the Wii U CPU can simply be 3 Broadways on a single die. Am I missing something?
 
Back on topic.




How was the GFLOPS performance of the Xbox 1 CPU vs. Gekko, and Gekko vs. Broadway? I can't find a GFLOPS listing for the Xbox 1 CPU or the Wii CPU, only 1.9 GFLOPS for Gekko.

Then there is that chart that was posted a while back, either in this thread or the GPU thread, which showed the huge overall performance increase of the 2007 PPC750 vs. the older one, which boasted higher specs in pretty much every department. I'm curious as to whether Espresso could also have been made with much more efficient components that produce performance higher than what the simple at-a-glance increases would dictate.

Also, another thing posted by lostinblue raises questions about the "3 Broadways sandwiched together" theory.


The PPC750CL @ 733 MHz is clearly Broadway. If it was simply just 3 Broadways, then the power draw should increase by 3 times as well, to around 12 watts. This demonstrates that it eats much more power as the clock increases, so that combined with the increase from 733 MHz to 1.2 GHz would yield nearly a 25-30 W draw. That is without taking into consideration things like the increased cache on Espresso.

We know that the active load of the Wii U is 45 watts, if I'm not mistaken. The estimated power draw for Espresso is 7 watts. Just clocking a regular single-core PPC750CL to 1 GHz yields a higher power draw than that.

From what I'm seeing, there is no way the Wii U CPU can simply be 3 Broadways on a single die. Am I missing something?

Well, Espresso is on a smaller process than Broadway, so it will use less power. Also, I believe using eDRAM for the L2 cache reduces power consumption as well.
 
How was the GFLOPS performance of the Xbox 1 CPU vs. Gekko, and Gekko vs. Broadway? I can't find a GFLOPS listing for the Xbox 1 CPU or the Wii CPU, only 1.9 GFLOPS for Gekko.
A Pentium III Coppermine @ 1 GHz amounted to 2 GFLOPS, so we're talking about 1.4/1.5 GFLOPS or so for the 733 MHz Xbox Coppermine part with half the L2 cache.

I reckon Microsoft over-inflated it back then though, all the way to 2.9/3 GFLOPS, which were obviously fake.

Gekko was a little bit better in FLOPS. (The Celeron had SSE though, which might have helped on code/ports that used it, just like Gekko could be helped by its "50 SIMD instructions".)
Then there is that chart that was posted a while back, either in this thread or the GPU thread, which showed the huge overall performance increase of the 2007 PPC750 vs. the older one, which boasted higher specs in pretty much every department. I'm curious as to whether Espresso could also have been made with much more efficient components that produce performance higher than what the simple at-a-glance increases would dictate.
Probably not. The 750CL/Broadway seems to be a direct evolution of Gekko, so no major toolset revision took place, which also means compatibility stays the same without any caveats. Of course they changed the design a bit this time around, for stuff like the cache and the 3-way SMP implementation, but they seem to have been determined to keep it 100% compatible nonetheless and not add more logic to the cores, so they should perform the same.

I hope they added some new SIMD instructions, but that'll be it.
The PPC750CL @ 733 MHz is clearly Broadway. If it was simply just 3 Broadways, then the power draw should increase by 3 times as well, to around 12 watts. This demonstrates that it eats much more power as the clock increases, so that combined with the increase from 733 MHz to 1.2 GHz would yield nearly a 25-30 W draw. That is without taking into consideration things like the increased cache on Espresso.

We know that the active load of the Wii U is 45 watts, if I'm not mistaken. The estimated power draw for Espresso is 7 watts. Just clocking a regular single-core PPC750CL to 1 GHz yields a higher power draw than that.

From what I'm seeing, there is no way the Wii U CPU can simply be 3 Broadways on a single die. Am I missing something?
You're missing the core shrink that took place.

90 nm to 45 nm. Now, core shrinking is never linear, but we do have a previous core-shrink scenario in this architecture.

180 nm Gekko @ 486 MHz = Rated 4.9 Watts
90 nm Broadway @ 729 MHz = 3.9 Watts? (announced to be 20% less power hungry than Gekko despite having 50% increase in frequency)

90 nm PowerPC 750 CL @ 733 MHz = Rated for a 4.8 Watts maximum draw

What does this mean? The 3.9 Watt figure for Broadway is probably a real-world average rather than full load/peak like the figures for Gekko and the 750CL; peak is 4.8 Watts. Peaks are often not sustainable anyway, so going for an average probably makes more sense. They could also have messed with the voltage, making it a 1/1.1 V part going at 729 MHz whereas the regular PPC750CL at 733 MHz is fed 1.1/1.2 V; that would be a very Nintendo thing to do, undervolting the 800 MHz parts.

Anyway, because this scaling is never linear, this core shrink is most definitely not spending 4 Watts per core @ 1.24 GHz, but probably never under 3 Watts either, had they been separate cores.


The thing with conjoined cores is that they tend to spend a little less energy.

Here's a case scenario:

45 nm Penryn Core 2 Solo 1.2-1.4 GHz - 5.5 Watts TDP
45 nm Penryn Core 2 Duo 1.2-1.6 GHz - 10 Watts TDP (not 11 Watts)

As for the increased cache's power needs, I don't think we have a way to guess the difference; but it's eDRAM now, and it's supposed to use less energy at the same capacity.


7/8 Watts TDP for the whole thing seems to be about right.
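
A very rough way to sanity-check those per-core wattage guesses is the usual dynamic-power relation, P ~ C * V^2 * f. The capacitance scaling and voltages in this little Python sketch are pure assumptions for illustration (nothing official has been published), but they show how a 90 nm to 45 nm shrink plus a small voltage drop can keep a 1.24 GHz core in the 2-3 Watt range:

def dynamic_power(base_watts, base_v, base_f, new_v, new_f, cap_scale):
    # P ~ C * V^2 * f; cap_scale stands in for the capacitance drop from the process shrink
    return base_watts * cap_scale * (new_v / base_v) ** 2 * (new_f / base_f)

# start from the 90 nm 750CL: ~4.8 W peak at 733 MHz, assumed 1.15 V
# hypothetical 45 nm core: 1243 MHz, assumed 0.95 V and ~50% of the original capacitance
per_core = dynamic_power(4.8, 1.15, 733e6, 0.95, 1243e6, 0.50)
print(round(per_core, 1), round(3 * per_core, 1))   # ~2.8 W per core, ~8.3 W for three cores

That lands in the same ballpark as the 7/8 Watt guess, before accounting for shared logic and the eDRAM cache.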
 

krizzx

Junior Member
A Pentium III Coppermine @ 1 GHz amounted to 2 GFLOPS, so we're talking about 1.4/1.5 GFLOPS or so for the 733 MHz Xbox Coppermine part with half the L2 cache.

I reckon Microsoft over-inflated it back then though, all the way to 2.9/3 GFLOPS, which were obviously fake.

Gekko was a little bit better in FLOPS. (The Celeron had SSE though, which might have helped on code/ports that used it, just like Gekko could be helped by its "50 SIMD instructions".)
Probably not. The 750CL/Broadway seems to be a direct evolution of Gekko, so no major toolset revision took place, which also means compatibility stays the same without any caveats. Of course they changed the design a bit this time around, for stuff like the cache and the 3-way SMP implementation, but they seem to have been determined to keep it 100% compatible nonetheless and not add more logic to the cores, so they should perform the same.

I hope they added some new SIMD instructions, but that'll be it.
You're missing the core shrink that took place.

90 nm to 45 nm. Now, core shrinking is never linear, but we do have a previous core-shrink scenario in this architecture.

180 nm Gekko @ 486 MHz = Rated 4.9 Watts
90 nm Broadway @ 729 MHz = 3.9 Watts? (announced to be 20% less power hungry than Gekko despite having 50% increase in frequency)

90 nm PowerPC 750 CL @ 733 MHz = Rated for a 4.8 Watts maximum draw

What does this mean? The 3.9 Watt figure for Broadway is probably a real-world average rather than full load/peak like the figures for Gekko and the 750CL; peak is 4.8 Watts. Peaks are often not sustainable anyway, so going for an average probably makes more sense. They could also have messed with the voltage, making it a 1/1.1 V part going at 729 MHz whereas the regular PPC750CL at 733 MHz is fed 1.1/1.2 V; that would be a very Nintendo thing to do, undervolting the 800 MHz parts.

Anyway, because this scaling is never linear, this core shrink is most definitely not spending 4 Watts per core @ 1.24 GHz, but probably never under 3 Watts either, had they been separate cores.


The thing with conjoined cores is that they tend to spend a little less energy.

Here's a case scenario:

45 nm Penryn Core 2 Solo 1.2-1.4 GHz - 5.5 Watts TDP
45 nm Penryn Core 2 Duo 1.2-1.6 GHz - 10 Watts TDP (not 11 Watts)

As for the increased cache's power needs, I don't think we have a way to guess the difference; but it's eDRAM now, and it's supposed to use less energy at the same capacity.


7/8 Watts TDP for the whole thing seems to be about right.

Ah, well that makes a lot more sense.

I wish there was a way to actually take the CPU and run data through it on another board or something of the like and see its RAW performance when using well known code.

This is definitely the most energy efficient processor I have ever seen if this is all accurate. I kind of wish Nintendo would have added another core to the CPU just to give that extra kick.

I remember seeing documentation that showed the PowerPC processor of that time returning 3 times the performance of the full-scale Pentium 3. I can't find the page, but it was a 200 MHz G4 vs. a 300 MHz Pentium 3.

EDIT: Found it. http://macspeedzone.com/archive/4.0/g4vspent3signal.html

That is why I was curious. Is GFLOPS an absolute measurement of the limit of overall performance capability? By my estimate, going by the old data, Gekko should have been a little over twice as strong as the Xbox 1 CPU. How do the Nintendo-based PPC750s stand up to the G4, performance/efficiency-wise?
http://www.oocities.org/imac_driver/cpu.html

The PPC750 sure has a lot of derivatives now that I look at it. Do we know for certain which one Espresso uses?
https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/B22567271129DE7F87256ADA007645A6/$file/PowerPC_750L_vs_750CXe_Comparison.pdf

If a single-core PPC750 takes in 3 instructions and retires two each cycle, then how would that compare to a hyper-threaded processor core, or a dual-core processor that returned a single instruction per core? I always thought it would play out pretty much the same, but I remember from a long time ago, during the Core 2 Duo days, people saying that being dual-core gave an extreme performance boost over hyper-threading even though they are both processing 2 instructions at once. I never quite understood that.
 
This is definitely the most energy efficient processor I have ever seen if this is all accurate. I kind of wish Nintendo would have added another core to the CPU just to give that extra kick.
Well, in a sense.

IBM is bad at power gating; the tighter the CPU (and this one is tight), the less of an issue it is, but if you look at an Intel Atom part idling at 0.004 Watts, the PPC750 starts showing its age.

Actual performance per Watt, though, remains impressive, yes. And I suppose the improved cache should bring it up to 2.71 DMIPS/MHz, which is what the large 1 MB cache did for the PPC476FP, so there's your performance gain over Gekko/Broadway.
I remember seeing documentation that showed the PowerPC processor of that time returning 3 times the performance of the full-scale Pentium 3. I can't find the page, but it was a 200 MHz G4 vs. a 300 MHz Pentium 3.

EDIT: Found it. http://macspeedzone.com/archive/4.0/g4vspent3signal.html
That's a little bit unfair.

They're pitting MMX against AltiVec. AltiVec is the IBM/Freescale answer to MMX, sure... but the Pentium 3 had SSE, which was Intel's answer to later stuff like 3DNow!. Still, that's a comparison the G4 was sure to come out on top of, because it had four-way single-precision floating point, something the PPC750/G3 (and the Pentium 3) didn't.

Paired singles on Gekko/Broadway make it 2-way.
That is why I was curious. Is GFLOPS an absolute measurement of the limit of overall performance capability? By my estimate, going by the old data, Gekko should have been a little over twice as strong as the Xbox 1 CPU. How do the Nintendo-based PPC750s stand up to the G4, performance/efficiency-wise?
http://www.oocities.org/imac_driver/cpu.html
Not so.

Gekko had 1125 DMIPS @ 486 MHz
XCPU had 951.64 DMIPS @ 733 MHz

An 866 MHz Pentium III would make it even, at 1124.31 DMIPS, and a Broadway @ 729 MHz should be 1687.5 DMIPS; still not close to doubling it despite being at almost 733 MHz. ;)
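
Just to spell out the ratios being referenced (integer DMIPS only, which says nothing about floating point), a quick Python check:

gekko_dmips, broadway_dmips, xcpu_dmips = 1125, 1687.5, 951.64
print(round(gekko_dmips / xcpu_dmips, 2))      # ~1.18x, 486 MHz Gekko vs 733 MHz XCPU
print(round(broadway_dmips / xcpu_dmips, 2))   # ~1.77x at a near-identical clock, so not quite double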

It's still better, and it supported compression, which was a plus (more here).
The PPC750 sure has a lot of derivatives now that I look at it. Do we know for certain which one Espresso uses?
https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/B22567271129DE7F87256ADA007645A6/$file/PowerPC_750L_vs_750CXe_Comparison.pdf
An evolved PPC750CL for sure.

For only Gekko and the 750CL/Broadway had paired singles (perhaps the CXe had them in there but disabled in order to increase Gekko yields, I dunno).

The only alternative to those that IBM had would be the canceled PPC750VX, which would have had AltiVec (read: four-way single-precision floating point), and which I suspect was never even close to production.

That leaves them with core shrinking and supercharging the cache of a PPC750CL.

Like I said, the worst-case scenario is keeping in line with the 256 KB L2 performance and delivering 2877.32 DMIPS per core, and the best-case scenario is 3368.53 DMIPS (2.71 DMIPS/MHz) per core.
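
For anyone wondering where those figures come from, they're just DMIPS-per-MHz ratings multiplied out by the clock; using the commonly reported 1243 MHz Espresso clock, the products land right on the numbers above (a quick Python check):

gekko_dmips_per_mhz = 1125 / 486          # ~2.31, the Gekko/Broadway rating quoted earlier
best_case_dmips_per_mhz = 2.71            # the PPC476FP-style figure with the bigger cache
espresso_mhz = 1243
print(round(gekko_dmips_per_mhz * espresso_mhz, 2))      # ~2877.31 per core, the worst case
print(round(best_case_dmips_per_mhz * espresso_mhz, 2))  # 3368.53 per core, the best case
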
If a single core PPC750 takes 3 instruction and returns two each cycle, then how would that compare to a hyper-threading processor core or a dual core processor that returned a single instruction for each single core's performance? I always thought it would play out the same pretty much, but I remember from a long time ago during the Core 2 Duo days that people said that being dual core gave an extreme performance boost over hyper-threading even though they are both processing 2 instructions at once. I never quite understood that.
That's kinda hard to explain without getting too technical and risking overstepping my boundaries.

But it's basically because hyper-threading uses execution headroom that would otherwise go unused (I usually use the example of a house with a high ceiling where not much goes above your head, leaving unused space). That's why it is a 30% performance boost at most, usually more like 10-15%. That second thread is not meant to run code with the same priority and density as the main thread; it's for secondary stuff, to cram more into each cycle. It would also be useless on a short-pipeline design like the Espresso/PPC750 (a short pipeline eliminates the "high ceiling" I was talking about, and it also makes the CPU more efficient).

A second core though... is a second core, certainly preferable to HT.
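If a toy model helps: pretend the core has a fixed number of issue slots per cycle, count how many a single thread actually fills, and compare what HT vs. a second core buys you. The utilisation numbers below are invented purely for illustration, not measured on any of these chips.

Code:
#include <stdio.h>

/* Toy model only: all utilisation numbers are made up for illustration.    */
/* A wide core with idle issue slots gains something from HT; a narrow,     */
/* short-pipeline core like the PPC750 has little idle space to fill, while */
/* a second core roughly doubles throughput regardless.                     */
int main(void)
{
    const double one_thread = 0.60; /* fraction of issue slots one thread fills  */
    const double ht_fill    = 0.15; /* extra slots a second HT thread might fill */

    const double with_ht   = one_thread + ht_fill;
    const double two_cores = 2.0 * one_thread;

    printf("HT gain:          %.0f%%\n", 100.0 * (with_ht   / one_thread - 1.0)); /* 25%  */
    printf("Second core gain: %.0f%%\n", 100.0 * (two_cores / one_thread - 1.0)); /* 100% */
    return 0;
}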
 

krizzx

Junior Member
Like I said, the worst-case scenario is keeping in line with the 256 KB L2 performance and delivering 2877.32 DMIPS, and the best-case scenario is 3368.53 DMIPS (2.71 DMIPS/MHz) per core.

That's kinda hard to explain without getting too technical and risking overstepping my boundaries.

But it's basically because hyper-threading uses execution headroom that would otherwise go unused (I usually use the example of a house with a high ceiling where not much goes above your head, leaving unused space). That's why it is a 30% performance boost at most, usually more like 10-15%. That second thread is not meant to run code with the same priority and density as the main thread; it's for secondary stuff, to cram more into each cycle. It would also be useless on a short-pipeline design like the Espresso/PPC750 (a short pipeline eliminates the "high ceiling" I was talking about, and it also makes the CPU more efficient).

A second core though... is a second core, certainly preferable to HT.

So then what about being able to process multiple instructions per cycle vs hyper-threading?

Which produces better performance? Being able to read 3 and execute 2 instructions per CPU cycle, or using hyper-threading?

Also, how does Broadway stack up to the G4? That site had a G4 clocked at 1 GHz listed as performing at 15 GFLOPS. I would expect Espresso to have much better performance than the G4. I wonder if the Nintendo CPUs have things like MMX and AltiVec.

Last of all, how much difference would out-of-order vs. in-order make in all of this? I was taking that into consideration when comparing Gekko to the XCPU. That's why I said it should be around twice the performance: it wouldn't run into as many problems as the XCPU would.

That actually makes it look even more like Gekko got around twice the performance. A bandwidth of 10 GB/s vs. 20-25 GB/s, 4 texture layers vs. 8 texture layers, 4 lights vs. 8 lights. Even the page lists the performance as around 2:1.

Though he does have one glaring mistake in his writeup: the GC didn't have 40 MB of 1T-SRAM. It had 24; the other 16 was auxiliary DRAM. Though I didn't know the Xbox 1 GPU could only use 16 MB of RAM for the GPU. That means the GC not only had much faster GPU RAM, it had more. It's sad that we never saw the full potential of the GC, and given that there was never a game that came even remotely close to Rogue Leader and Rebel Strike in performance on the Wii, that means the Wii's power was never even halfway put to use.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Maybe we will have to face the fact that it is a horrible, slow CPU...
I guess that's sarcasm, but in case it's not /disclaimer

It's not like Broadways and Bobcats are publicly available for anybody to test that...
 
So then what about being able to process multiple instructions per cycle vs hyper-threading?

Which produces better performance? Being able to read 3 and execute 2 instructions per CPU cycle, or using hyper-threading?

Difficult to say (as always it depends on the application), but not really relevant since there are no single-issue CPUs around. Dual-issue has been around since the first Pentium (or was it Pentium Pro?).

Last of all, how much difference would out-of-order vs. in-order make in all of this? I was taking that into consideration when comparing Gekko to the XCPU. That's why I said it should be around twice the performance: it wouldn't run into as many problems as the XCPU would.

By XCPU you mean Xenon or the Xbox 1 CPU (XCPU is another name for Xenon)?
Xenon not being out-of-order is one of that architecture's major drawbacks. That, together with Espresso's short pipeline, leaves no doubt that Gekko/Broadway/Espresso is more efficient per cycle for most code.
Pentium Pro/II/III, however, were out-of-order too.
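To put the out-of-order point in concrete terms, here's a contrived C fragment (made up for illustration, not from any real codebase); the comments describe how each kind of core would behave around a cache-missing load.

Code:
/* Contrived example: a[i] is a load that misses cache, b[i] is independent work. */
int sum_with_miss(const int *a, const int *b, int n)
{
    int from_a = 0, from_b = 0;
    for (int i = 0; i < n; i++) {
        int x = a[i];       /* suppose this load misses cache                  */
        from_a += x * 3;    /* in-order core: stalls here until x arrives      */
        from_b += b[i];     /* out-of-order core: keeps executing this (and    */
                            /* later iterations) while the miss is outstanding */
    }
    return from_a + from_b;
}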
 
You can protest all you like, but you keep popping into WiiU tech threads spouting barely understood technical terms in a vain attempt to paint it as having some kind of advantage.

As for the SPEs, talking about them as "1/3rd of a core" is pretty meaningless. Each is a fully independent CPU core with a purposefully limited instruction set and a small but very fast space of working memory. Since you aren't supposed to use them like a conventional CPU core, it isn't particularly helpful to try and make those kinds of comparisons. They are capable of both floating point and integer calculations, but they are strongly suited to 128-bit vector math. Individually, each SPE in the PS3 is capable of about 26 GFLOPS. That's 11 more GFLOPS for one SPE than all three Espresso cores combined.

And yes, in the 360 (and PS3) there is no dedicated sound hardware. Developers are free to devote as much or as little of the system's resources to audio as they want.

How many flops is the PPU core in Cell by itself? Curious because that's more comparable to a single Espresso core than the SPEs?

And why does the PS4's Jaguar CPU have so many more flops than Espresso? A single Jaguar core has almost as many flops as all 3 Espresso cores. I realize the PS4 CPU is clocked at 2 GHz (heavily rumored to be, anyway).

Edit: it seems like you touched on my question a bit in a post up above. 32-bit SIMD vs. 128-bit, instructions per cycle. Is there anything else? I'd be interested in how it compares mathematically, like what's being multiplied that makes Jaguar have so many more flops per core. Are they calculated the same way?
 
How many flops is the PPU core in Cell by itself? Curious because that's more comparable to a single Espresso core than the SPEs?

Basically the same as an SPE: 25.6 GFLOPS (3.2 * 4 * 2).

And why does the PS4's Jaguar CPU have so many more flops than Espresso? A single Jaguar core has almost as many flops as all 3 Espresso cores. I realize the PS4 CPU is clocked at 2 GHz (heavily rumored to be, anyway).

That's because Jaguar has 4-way SIMD operations (as opposed to 2-way).
 
The PPE in Cell is also capable of 25.6 GFLOPS because it also has a 128-bit AltiVec unit.

And the Jaguars have 128-bit vector units too, which is twice as wide as Espresso's 64-bit FPU, plus in the PS4 they are expected to be clocked at least 50% faster.
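To spell out what's being multiplied: theoretical peak single-precision throughput is clock × SIMD lanes × 2 (a multiply-add per lane per cycle). A quick sketch; the ~1.243 GHz Espresso clock and the 2 GHz Jaguar clock are assumed/rumoured figures from this discussion, not confirmed specs.

Code:
#include <stdio.h>

/* Peak single-precision GFLOPS = GHz * SIMD lanes * flops per lane per cycle.  */
/* Clocks for Espresso (~1.243 GHz) and Jaguar (2 GHz) are assumptions/rumours. */
static double peak_gflops(double ghz, int lanes, int flops_per_lane_per_cycle)
{
    return ghz * lanes * flops_per_lane_per_cycle;
}

int main(void)
{
    printf("Cell PPE / SPE: %.1f GFLOPS\n", peak_gflops(3.2,   4, 2)); /* 25.6           */
    printf("Espresso core:  %.1f GFLOPS\n", peak_gflops(1.243, 2, 2)); /* ~5.0 (x3 ~15)  */
    printf("Jaguar core:    %.1f GFLOPS\n", peak_gflops(2.0,   4, 2)); /* 16.0 at 2 GHz  */
    return 0;
}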
 
The PPE in Cell is also capable of 25.6 GFLOPS because it also has a 128-bit AltiVec unit.

And the Jaguars have 128-bit vector units too, which is twice as wide as Espresso's 64-bit FPU, plus in the PS4 they are expected to be clocked at least 50% faster.

So how would the performance of Cell's PPU compare to Espresso (is this basically the same thing as asking how 1 Xenon core would compare to Espresso? Xenon cores do seem to have more flops per core than the PPU)? Are they basically a wash when you consider Espresso's advantages (OoOE and more cache) vs. the higher flops and more instructions per cycle (due to 128-bit, etc.)?
 
The PPE will perform very much like a Xenon core, yes. It really depends on the code when comparing to Espresso. In a float-heavy workload, the PPE/Xenon cores would likely be dramatically faster. A branchy, integer-heavy program could be close, with the PPE/Xenon's low IPC at a high clock rate plus SMT competing with Espresso's higher IPC and shorter pipeline, and Espresso possibly coming out on top. Real applications will likely involve a mix of both kinds of code, however, so while Espresso could be faster running your gameplay loop and AI, if your game spends a lot of time waiting for the physics simulation on the FPU, where Espresso is way slower, that could be a big problem.
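For a concrete feel of those two workload types, here's a hypothetical pair of C loops (illustration only): the first is the streaming float math that wide vector units and high clocks eat up, the second is the branchy, pointer-chasing integer code where a short pipeline and out-of-order execution matter more.

Code:
/* Float-heavy, SIMD-friendly: the kind of loop that favours PPE/Xenon-style vector units. */
void integrate(float *pos, const float *vel, int n, float dt)
{
    for (int i = 0; i < n; i++)
        pos[i] += vel[i] * dt;      /* pure streaming multiply-add */
}

/* Branchy, integer-heavy: favours Espresso's short pipeline and out-of-order core. */
struct node { int score; struct node *next; };

int best_score(const struct node *list)
{
    int best = 0;
    while (list) {                  /* pointer chasing plus unpredictable branches: */
        if (list->score > best)     /* costly on a long-pipeline, in-order core     */
            best = list->score;
        list = list->next;
    }
    return best;
}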
 