
RAGE - What we know, crumbs and morsels on id's hybrid racer/shooter (56k is RAGIN)

Thunderbear

Mawio Gawaxy iz da Wheeson hee pways games
I love the look of the screenshots... So nice not to spot a single repeating texture anywhere. The shots, except the freeway one, look slightly less impressive than the Game Informer shots and previous videos.

Maybe it's just that I've gotten used to it a bit so it's not as impressive as the first time I saw it. Still looks fantastic.
 
according to the slides it seems it would have made things easier for ID if sony didn't go with a heterogeneous cpu architecture.

*hides*
 

Xdrive05

Member
Wollan said:
*pets pc*

Hell yes. I recently got a Phenom II 940. Finally made the X4 jump. Hopefully we'll start seeing PC games take full advantage of this tech ASAP.

Carmack mentioned that this implementation of megatexture is particularly hard on processing time (to decompress the data), so it sounds like a good CPU will make a significant difference with Rage.
 

Thunderbear

Mawio Gawaxy iz da Wheeson hee pways games
lemon__fresh said:
according to the slides it seems it would have made things easier for ID if sony didn't go with a heterogeneous cpu architecture.

*hides*

Multicore is the future for any platform as it seems right now. I think Sony made the right choice with the SPUs, and they will probably be able to utilize Cell for the next console with much more power and much lower cost by simply adding SPUs. Possibly even to the point of getting rid of the GPU which is the way we are heading on the PC side.

Edit: My previous post thanking Gofreak for his factual report on the Edge article ended up at the bottom of the page. It's so frustrating to me to see when journalists don't report objectively, or to see people jump on a company based on a quote taken out of its context.

To all the websites that reported on how the PS3 is struggling but didn't report that other platforms are struggling as well: that's plain bad "journalism." (I'm still hesitant to use the word journalism when it comes to videogame "journalists"; they haven't proven themselves objective in any way, this being one good example.)
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
according to the slides it seems it would have made things easier for ID if sony didn't go with a heterogeneous cpu architecture.

*hides*

Would have made things easier, but by the sounds of it the 'tougher' route they took, that suits Cell better, scales better and thus likely leaves them better prepared for future hardware and many-core systems...so it's not all bad :p
 
Thanks for clearing this up, gofreak. Says a lot about mainstream media when all we get are some confusing fragments.

Game looks amazing as always, release this mofo already! I am already dreaming of the Beyond3d comparison threads... :lol
 
Thunderbear said:
Multicore is the future for any platform as it seems right now. I think Sony made the right choice with the SPUs, and they will probably be able to utilize Cell for the next console with much more power and much lower cost by simply adding SPUs. Possibly even to the point of getting rid of the GPU which is the way we are heading on the PC side.

The PC side is also NOT moving towards heterogeneous CPUs. It really makes more sense to have robust identical cores, although I do see what you're saying about the price point. It would be easier on the programmers in the long run though.
 
gofreak said:
Would have made things easier, but by the sounds of it the 'tougher' route they took, that suits Cell better, scales better and thus likely leaves them better prepared for future hardware and many-core systems...so it's not all bad :p

how does it scale better?

and how is making life tougher for the most "influential" developers a good route?
 

panda21

Member
That pdf gofreak linked describes how their job parallelism system works. The way they had to build it because of Cell should scale very well to machines with a shitload of cores, like the high-end GPU stuff.
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
how does it scale better?

Well, that's what the pdf says.

I assume because small, independent, stateless jobs are easier to schedule and to just throw at any available core without thinking too much. That approach will work better as the number of cores goes up. You might be able to schedule larger tasks with dependencies between each other 'OK' with a smaller number of cores (i.e. as in the first approach in the pdf), but doing that with a large number would be very tricky without losing the benefits of having a larger number of cores.

If you split your work into two big jobs or threads you can run in parallel, that's not going to scale very well on more than two cores. You're gonna be idling a lot of your processors. But if you split your work into lots of little jobs that can run independently, you can keep all of your cores busy, or at least as many cores as you have jobs at a given point.
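To put that in rough code terms (just a toy sketch of the general idea in ordinary C++, not anything taken from the id pdf): a pool of workers pulling small independent jobs off a shared queue keeps however many cores you have busy, whereas two big dedicated threads can never use more than two.

Code:
// Toy job-pool sketch: N workers drain a queue of small, independent jobs,
// so the work spreads over however many cores the machine happens to have.
#include <algorithm>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> jobs;
    for (int i = 0; i < 1000; ++i)                            // lots of little jobs...
        jobs.push([i] { volatile int x = i * i; (void)x; });  // stand-in for real work

    std::mutex m;
    auto worker = [&] {
        for (;;) {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lock(m);
                if (jobs.empty()) return;           // queue drained, this core is done
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();                                  // runs with no dependency on other jobs
        }
    };

    // One worker per hardware core: 2, 6 or 8+ cores, same code, and no core
    // goes idle until the queue runs dry.
    std::vector<std::thread> pool;
    const unsigned n = std::max(2u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}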

lemon__fresh said:
and how is making life tougher for the most "influential" developers a good route?

Putting in that work now gives them better results on one system, works better on the other systems too, and prepares them to scale better on future hardware. It's harder, but it's not without its payback. Whether it was a good choice or not for id, I don't know, I can't speak for them, but I'd reckon they're happy enough to have brought that work forward and got it over with.
 
lemon__fresh said:
The PC side is also NOT moving towards heterogeneous CPUs. It really makes more sense to have robust identical cores, although I do see what you're saying about the price point. It would be easier on the programmers in the long run though.

I would actually argue this point to some tenuous extent. We could say that a PC's reliance on GPUs has created a sort of heterogeneous environment on the platform. I think it's better to look at the SPEs on Cell as having more in common with a graphics architecture than they do with a general CPU's architecture. With more and more devs asking for programmable shaders and the like, the fact that CPU+GPU are both required is creating a heterogeneous environment. It falls apart when you consider that PS3 also has a GPU, but as I recall I think they still meant the SPEs for graphics related stuff.
 

Thunderbear

Mawio Gawaxy iz da Wheeson hee pways games
lemon__fresh said:
The PC side is also NOT moving towards heterogeneous CPUs. It really makes more sense to have robust identical cores, although I do see what you're saying about the price point. It would be easier on the programmers in the long run though.

I don't know why you are disagreeing with me. We don't have to argue, we could just, you know, share and discuss information. It's not a battle.

I'll try to dig up sources but everything I've read is that the PC is going to more cores, and in the next 5 years there's going to be very little reason to keep the CPU and GPU separate. The SPUs on the Cell are already being used to render geometry, which a lot of people seem to forget when they say "if you have a 7900GT you have the same power as a PS3".

Multi-core is still fairly new relatively speaking, and the more cores there are, the more trouble developers have had adapting to it. But I didn't think anyone would disagree that all the hardware manufacturers are heading towards the multicore structure and getting rid of the GPU so to speak (well, rather that with the number of cores future processors will have, a GPU becomes more and more redundant).
 
Thunderbear said:
I don't know why you are disagreeing with me. We don't have to argue, we could just, you know, share and discuss information. It's not a battle.

I'll try to dig up sources but everything I've read is that the PC is going to more cores, and in the next 5 years there's going to be very little reason to keep the CPU and GPU separate. The SPUs on the Cell are already being used to render geometry, which a lot of people seem to forget when they say "if you have a 7900GT you have the same power as a PS3".

Multi-core is still fairly new relatively speaking, and the more cores there are, the more trouble developers have had adapting to it. But I didn't think anyone would disagree that all the hardware manufacturers are heading towards the multicore structure and getting rid of the GPU so to speak (well, rather that with the number of cores future processors will have, a GPU becomes more and more redundant).

Exactly. A lot of post-processing and other graphics related work is being shifted to Cell now, so the SPEs are picking up work that was solely dedicated to the GPU, and I think the SPEs also add the sorts of performance you'd get from a dedicated physics chip. The SPEs should be seen as more of an extension of the GPU and CPU rather than solely as asymmetric CPU cores.
 
AbortedWalrusFetus said:
I would actually argue this point to some tenuous extent. We could say that a PC's reliance on GPUs has created a sort of heterogeneous environment on the platform. I think it's better to look at the SPEs on Cell as having more in common with a graphics architecture than they do with a general CPU's architecture. With more and more devs asking for programmable shaders and the like, the fact that CPU+GPU are both required is creating a heterogeneous environment. It falls apart when you consider that PS3 also has a GPU, but as I recall I think they still meant the SPEs for graphics related stuff.

Actually I'd say current GPUs are much more robust than the SPEs, especially when you look at the current shader libraries. Given the rise in GPU processing power and functionality, it looks like the PC is approaching a more unified homogeneous architecture.

It also appears that you guys are beginning to agree with me about the homogeneous approach being the future.
 

gofreak

GAF's Bob Woodward
I don't really get why Carmack was so negative on Cell given what appears to be discussed in that PDF (always hard to tell from slides alone...would be nice if we had audio or video from that lecture).

I mean, support for CUDA/Larrabee et al is part of the brief for idTech 5.

And Cell, per the pdf, 'forced' the job system rewrite that they're 'cautiously optimistic' will fit well with those other many-core architectures (with more work to be done to suit the specifics of those frameworks).

If that does turn out to be the case, their Cell-related work might after all have been useful for more than just the PS3.

That said, Carmack has been a little bit less stinging with his criticism of Cell lately, so maybe this is why.
 
gofreak said:
I don't really get why Carmack was so negative on Cell given what appears to be discussed in that PDF (always hard to tell from slides alone...would be nice if we had audio or video from that lecture).

because they had to rewrite the engine in order to make it viable for the cell.
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
because they had to rewrite the engine in order to make it viable for the cell.

Yeah, but it may not be quite the 'special case' Carmack suggested initially. One of his criticisms earlier was that all this work you had to put into Cell would only be of use on Cell. But based on the PDF that may not have turned out to be the case, there may well be some non-PS3 related benefits to fall out of that work too.
 
Thunderbear said:
But I didn't think anyone would disagree that all the hardware manufacturers are heading towards the multicore structure and getting rid of the GPU so to speak (well, rather with the amount of cores the future processors would have, it would make a GPU more and more redundant).

Agreed

As for the Cell in particular I think the problems are not with the number of cores but with what the cores are capable of handling. Like I said previously, today's GPUs are a dream to program on when compared with the SPEs; maybe Sony just needs to make better tools
 

Wollan

Member
lemon__fresh said:
because they had to rewrite the engine in order to make it viable for the cell.
And in return they have come in early with work they would otherwise have delayed until their next engine if not for that chip; they're probably a step ahead of most multiplatform middleware devs. Now they can make more efficient use of Cell, Larrabee, etc.
 
gofreak said:
Yeah, but it may not be quite the 'special case' Carmack suggested initially. One of his criticisms earlier was that all this work you had to put into Cell would only be of use on Cell. But based on the PDF that may not have turned out to be the case, there may well be some non-PS3 related benefits to fall out of that work too.

True, but will all the concessions and Cell-centric design decisions really make a difference in an architecture similar to the Cell but with all processors being able to execute regular CPU instructions? I assume they always wanted to take advantage of multicore technology.
 

Thunderbear

Mawio Gawaxy iz da Wheeson hee pways games
lemon__fresh said:
Agreed

As for the Cell in particular I think the problems are not with the number of cores but with what the cores are capable of handling. Like I said previously, today's GPUs are a dream to program on when compared with the SPEs; maybe Sony just needs to make better tools

Yeah, I think the Cell is an early step which is why there's been such a headache for a lot of programmers to adapt. Especially ones needing to write an engine that works on multiple platforms. We'll see if it pays off, but I think the Cell definitely wasn't a one generation solution but rather an attempt at thinking ahead.

And I agree the tools need to mature a lot, and that the SPUs and their implementation are in their adolescence (so not by any means all powerful or perfect).
 
Thunderbear said:
Yeah, I think the Cell is an early step which is why there's been such a headache for a lot of programmers to adapt. Especially ones needing to write an engine that works on multiple platforms. We'll see if it pays off, but I think the Cell definitely wasn't a one generation solution but rather an attempt at thinking ahead.

And I agree the tools need to mature a lot, and that the SPUs and their implementation are in their adolescence (so not by any means all powerful or perfect).

The Cell came out at a time when programmers were just getting the hang of multithreading and multicore software design. Having to do multicore programming on CPUs that are even lower on the totem pole than some GPUs is not easy. I bet a lot of developers are looking forward to Sony's NEXT console.
 
I doubt that a lot of the concessions they made for the Cell will really matter when the next line of processors arrives, since I assume they would have ended up with something similar to the "job" system in order to deal with multicore synchronization issues.
 

panda21

Member
lemon__fresh said:
True, but will all the concessions and Cell-centric design decisions really make a difference in an architecture similar to the Cell but with all processors being able to execute regular CPU instructions? I assume they always wanted to take advantage of multicore technology.

i'm not entirely sure what you mean but the SPUs on the cell can already execute 'regular cpu instructions'.

the GPU approach of nvidia is much more limited in what you can run on it

larrabee (and cell in a sense) is capable of running a full OS on its cores, but they are specialised cores for dealing with high throughput floating point operations, just like the cell spus.

in a homogeneous approach you would need to combine very general purpose processors with a lot of floating point power and special rendering oriented functions to be able to get rid of the gpu. the way to do that imo is what intel have done with larrabee where you have many many general purpose cores that can also do a lot of parallel floating point operations and have supporting texture calculation stuff. such an architecture is pretty similar to the cell and its spus, but on a larger scale, and potentially makes the ppe part redundant. but it would be a logical next step in what id have done with cell.
 

Gorgon

Member
Thunderbear said:
Multicore is the future for any platform as it seems right now. I think Sony made the right choice with the SPUs, and they will probably be able to utilize Cell for the next console with much more power and much lower cost by simply adding SPUs.

They need more PPUs also for more general purpose code. The SPUs are not ideally suited for running every type of code, not to mention that the PPU has to prepare work for the SPUs. Putting in only one PPE again and just adding SPEs would lead to a very uneven CPU, IMHO.

I expect 3-4 PPEs with a bump in the number of SPEs in the next PS, especially because that would still be cheaper than the R&D was for the first Cell on the PS3. I expect them to ditch XDR memory too and go for GDDR memory instead (in a unified pool).
 

Wollan

Member
Gorgon said:
I expect them to ditch XDR memory too and go for unified GDDR memory instead.
This they will do. I'm betting their architects are looking back at that and going 'why?'. The talk was that it was a leftover from when Kutaragi wanted the PS3 to consist of only Cells (two of them), and so the GPU and its RAM were added on later. The XDR is very expensive, I believe, with not too much of a speed advantage (and quite 'unnecessary' complexity for devs when you could just have one big pool of memory instead).
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
True, but will all the concessions and Cell-centric design decisions really make a difference in an architecture similar to the Cell but with all processors being able to execute regular CPU instructions? I assume they always wanted to take advantage of multicore technology.

The short answer is yes.

Job code will be tailored to each platform, no doubt. So on that level you're not going to see too much reusability of implementations for each platform. And on that level I do understand the gritting of teeth on Cell vs other platforms (certainly vs the other 'big core' systems).

But the point was that Cell 'forced' an architectural decision - i.e. to go with a pool of small independent jobs - that as it turns out might apply well to other many-core architectures going forward too.

I'm just pointing out that if that turns out to be the case, it's a little different to what Carmack foresaw those years ago when he had his 'no good will come of this' attitude to the hacking-things-into-small-pieces approach that Cell embodied.
 
panda21 said:
i'm not entirely sure what you mean but the SPUs on the cell can already execute 'regular cpu instructions'.

the GPU approach of nvidia is much more limited in what you can run on it

larrabee (and cell in a sense) is capable of running a full OS on its cores, but they are specialised cores for dealing with high throughput floating point operations, just like the cell spus.

in a homogeneous approach you would need to combine very general purpose processors with a lot of floating point power and special rendering oriented functions to be able to get rid of the gpu. the way to do that imo is what intel have done with larrabee where you have many many general purpose cores that can also do a lot of parallel floating point operations and have supporting texture calculation stuff. such an architecture is pretty similar to the cell and its spus, but on a larger scale, and potentially makes the ppe part redundant. but it would be a logical next step in what id have done with cell.

My bad, I'm mostly referring to the memory management capabilities of the SPUs.
 

panda21

Member
gofreak said:
The short answer is yes.

Job code will be tailored to each platform, no doubt. So on that level you're not going to see too much reusability of implementations for each platform. And on that level I do understand the gritting of teeth on Cell vs other platforms (certainly vs the other 'big core' systems).

But the point was that Cell 'forced' an architectural decision - i.e. to go with a pool of small independent jobs - that as it turns out might apply well to other many-core architectures going forward too.

I'm just pointing out that if that turns out to be the case, it's a little different to what Carmack foresaw those years ago when he had his 'no good will come of this' attitude to the hacking-things-into-small-pieces approach that Cell embodied.

did it say anything about how they make the job code? i suppose if they make it independent that limits it enough to be able to run on fairly simple GPU processors with CUDA or OpenCL, but then when it comes to larrabee that might end up being wasted. i guess ideally they would make it scale to the platform without having to rewrite everything.

although i suppose tech 5 might not make it onto those anyway depending on how soon all this stuff becomes mainstream.
 
panda21 said:
did it say anything about how they make the job code? i suppose if they make it independent that limits it enough to be able to run on fairly simple GPU processors with CUDA or OpenCL, but then when it comes to larrabee that might end up being wasted. i guess ideally they would make it scale to the platform without having to rewrite everything.

although i suppose tech 5 might not make it onto those anyway depending on how soon all this stuff becomes mainstream.

according to the slides it seems that the goal is to make them independent.
 
lemon__fresh said:
Texture trickery? By the looks of things you should also make sure you have a quad core as well.

If it's a 60hz game on consoles then a bog-standard Core2 is going to manage just fine, and it even looks like you might be able to let your GPU help your CPU out if that's where your weakness lies. It should have really nice optimisation for quads but it doesn't look like it's going to be a requirement either.
 

gofreak

GAF's Bob Woodward
panda21 said:
did it say anything about how they make the job code? i suppose if they make it independent that limits it enough to be able to run on fairly simple GPU processors with CUDA or OpenCL, but then when it comes to larrabee that might end up being wasted. i guess ideally they would make it scale to the platform without having to rewrite everything.

although i suppose tech 5 might not make it onto those anyway depending on how soon all this stuff becomes mainstream.

Jobs are supposed to be independent and stateless according to the pdf. I don't know if that's an absolute rule they're able to stick to always. If you can assume that, it does make some things simpler.

But jobs still need to do some communication, like when they're finished and such...so thread communication and collaboration support on a given platform wouldn't go completely unused. But I think the idea is that a given job should be able to complete without consulting anything else. And if you can make that happen I think it's a good thing even on platforms that ease the pain of inter-thread communication. That stuff is nice to have if you must have threads communicating with each other, but if you can get away with it, it might be better to avoid that communication altogether and just let things run on their own...otherwise you might risk having threads waiting for something else.

That is probably easier said than done, though, getting things to the point where EVERYTHING, every job, is independent...so they may bend that rule here and there.
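Something like this hypothetical sketch is what I have in mind by 'independent and stateless' (to be clear, not id's actual code): each job reads only its own input, writes only its own output slot, and the one piece of shared state is a counter it decrements when it finishes.

Code:
// Hypothetical stateless-job sketch: no locks or waits inside a job; the only
// "communication" is decrementing a shared completion counter at the end.
#include <atomic>
#include <thread>
#include <vector>

struct Job {
    const float* input;   // the data this job is allowed to read
    float*       output;  // the slot only this job writes to
    int          count;
};

void run_job(Job j, std::atomic<int>& remaining) {
    for (int i = 0; i < j.count; ++i)
        j.output[i] = j.input[i] * 0.5f;   // purely local work
    remaining.fetch_sub(1);                // signal "I'm done", nothing else
}

int main() {
    const int kJobs = 64, kChunk = 1024;
    std::vector<float> in(kJobs * kChunk, 1.0f), out(kJobs * kChunk);
    std::atomic<int> remaining(kJobs);

    std::vector<std::thread> threads;
    for (int j = 0; j < kJobs; ++j)
        threads.emplace_back(run_job,
                             Job{&in[j * kChunk], &out[j * kChunk], kChunk},
                             std::ref(remaining));

    for (auto& t : threads) t.join();
    return remaining.load();               // 0 => every job signalled completion
}

(Spawning a thread per job is just to keep the sketch short; in practice you'd feed these through a worker pool.)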
 
brain_stew said:
If it's a 60hz game on consoles then a bog-standard Core2 is going to manage just fine, and it even looks like you might be able to let your GPU help your CPU out if that's where your weakness lies. It should have really nice optimisation for quads but it doesn't look like it's going to be a requirement either.

Did you read the slides?
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
Did you read the slides?

I think for Rage (as in the game itself) a Core2 will be fine. That the engine can scale to more cores doesn't mean it's necessary to have many cores to hit 60hz in a specific game.

Maybe future idTech5 games might require more on the CPU side, but I don't think Rage will tax things too much.
 
lemon__fresh said:
Did you read the slides?

Yes I did (though not in depth, I admit). The point though is that Xenon is a very low-end CPU; if the engine can run at 60hz on Xenon, it can run at 60hz on a middling Core 2 as well.


gofreak said:
I think for Rage (as in the game itself) a Core2 will be fine. That the engine can scale to more cores doesn't mean it's necessary to have many cores to hit 60hz in a specific game.

Maybe future idTech5 games might require more on the CPU side, but I don't think Rage will tax things too much.

Yeah, basically that's the point. The engine will make excellent and hugely efficient use of a quad, but RAGE itself just won't demand that level of performance; still, it stands them in good stead going forward. The CPU side of their engine is ready and waiting to scale to the next generation of CPUs, so if developers want to use it to create games that really harness that technology then they're free to do so.

Doom 4 is meant to be a 30hz console game, so perhaps users with a middling Core 2 Duo are going to be CPU limited in that game and will require something more high end to get 60fps gameplay on PC. Of course by that point quadcores will be the baseline at retail and very prevalent, and the GPGPU part of the engine should be kicking into full gear, so it won't be as much of an issue as it may have been with RAGE.
 
panda21 said:
i'm not entirely sure what you mean but the SPUs on the cell can already execute 'regular cpu instructions'.

the GPU approach of nvidia is much more limited in what you can run on it

After looking through the blue-steel api source code to bone up on my Cell programming knowledge, it really is more work to pass data to the SPUs and to avoid branch-misprediction situations. Given that the latest shader APIs/GPUs now support dynamic looping and branching, I'd say GPUs have equal or greater potential than the SPUs.
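To give people an idea of what I mean by "more work to pass data", here's roughly what a trivial SPU-side routine looks like (a bare-bones sketch against the public Cell SDK's spu_mfcio.h, not anything from the blue-steel source): you can't just dereference main memory, you have to DMA everything into the 256KB local store, wait on the tag, do the work, then DMA it back out. On a GPU shader or a regular CPU core you'd just read the data.

Code:
// Bare-bones SPU-side sketch (assumes the Cell SDK's spu_mfcio.h interface).
#include <spu_mfcio.h>
#include <stdint.h>

static float buf[1024] __attribute__((aligned(128)));  // copy lives in local store

void double_chunk(uint64_t ea)  // ea = effective address of the data in main RAM
{
    const unsigned tag = 1;

    mfc_get(buf, ea, sizeof(buf), tag, 0, 0);   // pull 4KB from main memory
    mfc_write_tag_mask(1u << tag);
    mfc_read_tag_status_all();                  // stall until the DMA lands

    for (int i = 0; i < 1024; ++i)
        buf[i] *= 2.0f;                         // the actual work, kept branch-free

    mfc_put(buf, ea, sizeof(buf), tag, 0, 0);   // push the results back out
    mfc_write_tag_mask(1u << tag);
    mfc_read_tag_status_all();                  // and wait for the write-back too
}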
 
Since it seems a lot of the PC crowd want to be able to run the game at max, and judging by the focus on multiple cores, it wouldn't surprise me if you needed more than 2 cores to get maximum performance out of the game. Saying it runs on the 360, which still has 3 cores (even tho they are slow), doesn't mean much because I'm sure the graphical fidelity will probably be comparable to medium settings on the PC.
 
gofreak said:
The short answer is yes.

Job code will be tailored to each platform, no doubt. So on that level you're not going to see too much reusability of implementations for each platform. And on that level I do understand the gritting of teeth on Cell vs other platforms (certainly vs the other 'big core' systems).

But the point was that Cell 'forced' an architectural decision - i.e. to go with a pool of small independent jobs - that as it turns out might apply well to other many-core architectures going forward too.

I'm just pointing out that if that turns out to be the case, it's a little different to what Carmack foresaw those years ago when he had his 'no good will come of this' attitude to the hacking-things-into-small-pieces approach that Cell embodied.
Didn't Valve say that the 'running independent jobs on each SPU/core' is actually a horribly inefficient way of utilising a multi-core CPU in their presentation?
 
proposition said:
Didn't Valve say that the 'running independent jobs on each SPU/core' is actually a horribly inefficient way of utilising a multi-core CPU in their presentation?

Ha, interesting. Do you have a link to their presentation?
 

gofreak

GAF's Bob Woodward
lemon__fresh said:
After looking through the blue-steel api source code to bone up on my Cell programming knowledge, it really is more work to pass data to the SPUs and to avoid branch-misprediction situations. Given that the latest shader APIs/GPUs now support dynamic looping and branching, I'd say GPUs have equal or greater potential than the SPUs.

It's horses for courses. You're gonna find some jobs easier to fit on one platform, others easier on another. Slide 33 seems to have a rough first look at some of the opportunities and challenges they expect on a gpgpu code-path.

It's kind of moot anyway in the context of the above point...job code on each platform is going to look different and (hopefully) be well tailored to the strengths and weaknesses of their target processors.

The point wasn't that that stuff was somehow now more leverageable twixt Cell and other platforms, but that Cell forced a higher level architectural decision that may bring benefit beyond its own shores, which would be a nice side effect, a lot nicer than if that work only paid dividends on Cell. Job code work is still going to be Cell specific (and Intel specific and Xenon specific and Larrabee specific etc.), I didn't intend to suggest that work at that level would readily translate across platforms (though work on that level on some platforms may help inform the same work on others).

proposition said:
Didn't Valve say that the 'running independent jobs on each SPU/core' is actually a horribly inefficient way of utilising a multi-core CPU in their presentation?

Not sure, you'd have to point me in the direction of that. If you could actually do that - split your work up into a large number of independent units - it strikes me that this would be ideal for scaling across more cores. The real world might often be more complicated than that - I'm sure it's a very difficult thing to do with all your work - but as an ideal to strive for it seems to make sense...
 

gofreak

GAF's Bob Woodward
Is this what you're referring to proposition?

The programmers at Valve considered three different models to solve their problem. The first was called "coarse threading" and was the easiest to implement. Many companies are already using coarse threading to improve their games for multiple core systems. The idea is to put whole subsystems on separate cores; for example, graphics rendering on one, AI on another, sound on a third, and so on. The problem with this approach is that some subsystems are less demanding on CPU time than others. Giving sound, for example, a whole core to itself would often leave up to 80 percent of that core sitting unused.

That's not the model being talked about in the id pdf. That's actually more akin to the first approach they discarded before refactoring to the 'small jobs' model.

Valve uses a mix of the above and the 'small jobs' model where it's easy to split things into a larger number of finer tasks.
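For reference, 'coarse threading' in code terms is just pinning a whole subsystem to a whole thread, something like this toy sketch (hypothetical, not Valve's actual code). The split is fixed at three threads no matter how many cores you have, and the lighter subsystems leave their cores mostly idle:

Code:
// Toy "coarse threading" sketch: one thread per subsystem, each simulating
// 600 frames of 16ms. Sound only has ~3ms of work per frame, so its core
// idles most of the time, and cores beyond the third are never used at all.
#include <chrono>
#include <thread>

static void subsystem(std::chrono::milliseconds work_per_frame) {
    using namespace std::chrono;
    for (int frame = 0; frame < 600; ++frame) {
        const auto frame_end  = steady_clock::now() + milliseconds(16);
        const auto busy_until = steady_clock::now() + work_per_frame;
        while (steady_clock::now() < busy_until) { /* spin: stand-in for real work */ }
        std::this_thread::sleep_until(frame_end);   // rest of the frame: core sits idle
    }
}

int main() {
    std::thread render(subsystem, std::chrono::milliseconds(14)); // nearly saturated
    std::thread ai(subsystem, std::chrono::milliseconds(8));      // half busy
    std::thread sound(subsystem, std::chrono::milliseconds(3));   // ~80% idle
    render.join(); ai.join(); sound.join();
}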
 

peakish

Member
The Polish magazine CD-Action has interviewed Carmack about Rage / Tech 5 and put thirty minutes of it on YouTube:

http://www.cdaction.pl/news-7673/john-carmack-opowiada-o-rage---pol-godziny-goracego-materialu-prosto-ze-studia-id-software.html

I think it's new, at least, haven't seen it before. Haven't seen all of it yet.

Edit: Linked to their website with the interview instead to give them the hits.

Edit 2: I'll try to summarize some things, but there's not much new in it. Still interesting because Carmack just looks like he loves what he does, he's so excited :D

They are baking the light into the MT because they didn't really see any benefits gameplay or graphics wise for Rage in having it fully dynamic. He says letting an artist fully optimize the look of levels in fixed outdoor lighting will look better than having dynamic sunlight, which at some time of day will make some things look bad. They learned this from Splash Damage's work on Enemy Territory. In the end, it runs way faster. Personally, I hope Doom 4 uses less prebaked lighting if possible, I loved the lighting in Doom 3.

He says they want to stay with Id Tech 5 for a while because graphics/game technology isn't moving as fast as it was in the nineties. He also says that he's looking forward to the MEGA horsepower that the next console generation will bring; when it does, Id Tech 5-like tech could create WoW-size worlds.

They have some plans for the future: they want to be there at the launch of the next consoles with their tech instead of arriving in the middle of the generation as they are now. As I understand it he wants to get Id Tech 6 ready for that time, and Tech 6 is supposed to be the unlimited geometry stuff they are researching right now (he is very excited about returning to that research soon, they are/want to be prototyping right now).

If they can't get Tech 6 out for the launch he does talk about a backup plan, which is a renderer that works together with the Tech 5 renderer to display on the next consoles.

On mod support: He doesn't think there will be full support as it's too hard to create good mods these days, but there'll eventually be "some code" thrown out, though he thinks that'll mainly be good for research. It's been said earlier that mod support isn't a priority for them with Tech 5, with similar reasons given. I'm still sad about it :/
 
Fuck, that was fascinating. You just know when you or I think of "parallax maps" we go hmmmm, bumpy. But Carmack obviously thinks in 1s and 0s, Matrix-style.
 

dejan

Member
peakish said:
The Polish magazine CD-Action has interviewed Carmack about Rage / Tech 5 and put thirty minutes of it on YouTube:

http://www.cdaction.pl/news-7673/john-carmack-opowiada-o-rage---pol-godziny-goracego-materialu-prosto-ze-studia-id-software.html

I think it's new, at least, haven't seen it before. Haven't seen all of it yet.

Edit: Linked to their website with the interview instead to give them the hits.

Thanks, that was great. Carmack is basically talking for almost half an hour about several topics (id tech 5/6, Rage development, Doom 4 and other stuff) without someone even asking a question or interfering otherwise (there were some clear cuts in the third video tho).
 
lemon__fresh said:
Since it seems a lot of the PC crowd want to be able to run the game at max, and judging by the focus on multiple cores, it wouldn't surprise me if you needed more than 2 cores to get maximum performance out of the game. Saying it runs on the 360, which still has 3 cores (even tho they are slow), doesn't mean much because I'm sure the graphical fidelity will probably be comparable to medium settings on the PC.

Increasing the graphical fidelity in terms of resolution, aa, filtering as id have discussed isn't going to put much strain on the CPU.

I think you're really underestimating just how slow and simple each Xenon core is; the Atom would be a good comparison in terms of complexity. They get their speed because there's a lot of them and they have a high clock speed, but the single-threaded performance per clock really is abysmal when compared to a modern Intel/AMD CPU. An E6600 really does run rings around it, even if it is a core short and already rather outdated itself.
 
gofreak said:
It's horses for courses. You're gonna find some jobs easier to fit on one platform, others easier on another. Slide 33 seems to have a rough first look at some of the opportunities and challenges they expect on a gpgpu code-path.

It's kind of moot anyway in the context of the above point...job code on each platform is going to look different and (hopefully) be well tailored to the strengths and weaknesses of their target processors.

The point wasn't that that stuff was somehow now more leverageable twixt Cell and other platforms, but that Cell forced a higher level architectural decision that may bring benefit beyond its own shores, which would be a nice side effect, a lot nicer than if that work only paid dividends on Cell. Job code work is still going to be Cell specific (and Intel specific and Xenon specific and Larrabee specific etc.), I didn't intend to suggest that work at that level would readily translate across platforms (though work on that level on some platforms may help inform the same work on others).

The point I was trying to make was about the overall difficulty of programming for the SPUs vs GPUs. It is also obviously going to be more time consuming to program the SPUs/PPU when compared to standard multicore CPUs.

Actually Carmack made a similar point in regards to Cell programming in the CDA videos.
 