
RAGE - What we know, crumbs and morsels on id's hybrid racer/shooter (56k is RAGIN)

brain_stew said:
Increasing the graphical fidelity in terms of resolution, AA and filtering, as id have discussed, isn't going to put much strain on the CPU.

I think you're really underestimating just how slow and simple each Xenon core is; the Atom would be a good comparison in terms of complexity. They get their speed because there are a lot of them and they run at a high clock speed, but the single-threaded performance per clock really is abysmal compared to a modern Intel/AMD CPU. An E6600 really does run rings around it, even if it is a core short and already rather outdated itself.

Well, due to the use of virtual texturing and the latency issues it produces, I think it may depend on how much work they can offload onto the GPU (to reduce CPU load). Having said that, the render step already takes up the most time in their engine.
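To make the CPU-side work concrete, here's a minimal sketch of the kind of per-frame page-cache bookkeeping virtual texturing needs (all names and sizes are my own guesses, not id's actual code). The GPU reports which pages it wanted, and the CPU services the misses:

```cpp
// Minimal sketch of CPU-side virtual texture paging. All names and sizes
// are illustrative assumptions, not id Tech 5's actual implementation.
#include <cstdint>
#include <cstdio>
#include <list>
#include <unordered_map>
#include <vector>

struct PageCache {
    size_t capacity;                      // physical pages available in VRAM
    std::list<uint32_t> lru;              // most recently used page at front
    std::unordered_map<uint32_t, std::list<uint32_t>::iterator> resident;

    explicit PageCache(size_t cap) : capacity(cap) {}

    // Returns true on a miss, i.e. the page must be streamed in from disk.
    bool request(uint32_t pageId) {
        auto it = resident.find(pageId);
        if (it != resident.end()) {       // hit: just refresh its LRU position
            lru.splice(lru.begin(), lru, it->second);
            return false;
        }
        if (resident.size() == capacity) {  // full: evict least recently used
            resident.erase(lru.back());
            lru.pop_back();
        }
        lru.push_front(pageId);
        resident[pageId] = lru.begin();
        return true;                      // caller queues an async disk read
    }
};

int main() {
    PageCache cache(4096);                // e.g. 4096 resident 128x128 pages
    std::vector<uint32_t> feedback{7, 42, 7, 99};  // page IDs from GPU feedback
    for (uint32_t id : feedback)
        if (cache.request(id))
            std::printf("stream page %u from disk\n", (unsigned)id);
}
```

The latency issue is visible right there: a miss means waiting on a disk read, so the texture pops in blurry first.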

Meh, only time will tell.
 
fizzelopeguss said:
Fuck, that was fascinating. You just know when you or I think of "parallax maps" we go hmmmm, bumpy. But Carmack obviously thinks in 1s and 0s, Matrix-style.

I think it's just that we are hearing a programmer talk instead of a producer or a PR person. It's nice because he is actually capable of explaining his technology using proper terminology.

I'm excited to see if his next engine uses real-time raycast voxel primitives; now that would be something new and different.
 
lemon__fresh said:
Given that the latest shader APIs/GPUs now support dynamic looping and branch prediction, I'd say GPUs have equal or greater potential than the SPUs.

I read in a Cell white paper that, if the graphics code is structured to fully exploit them, 4 SPUs on the PS3 are about equal to the shading power of all 24 RSX pixel shader arrays. I don't know if anyone is even using the Cell for pixel shading. I'm no programmer, but the SPUs can't possibly compare to modern GPUs. I tend to believe the next PlayStation will keep a Cell/PC-derived GPU combo.
 
gofreak said:
Not sure, you'd have to point me in the direction of that. If you could actually do that - split your work up into a large number of independent units - it strikes me that this would be ideal for scaling across more cores. The real world might often be more complicated than that - I'm sure it's a very difficult thing to do with all your work - but as an ideal to strive for it seems to make sense...
I think their findings were basically that splitting up jobs to run independently wasn't efficient, because any job that runs faster or slower in a given frame means the cores it was running on just sit around idling, wasting time. They found that doing this over two cores could theoretically double your performance, but only in certain contrived situations; the actual gain was really only something like 1.2x.

http://www.bit-tech.net/gaming/pc/2006/11/02/Multi_core_in_the_Source_Engin/1
 

Truespeed

Member
proposition said:
I think their findings were basically that splitting up jobs to run independently wasn't efficient, because any job that runs faster or slower in a given frame means the cores it was running on just sit around idling, wasting time. They found that doing this over two cores could theoretically double your performance, but only in certain contrived situations; the actual gain was really only something like 1.2x.

http://www.bit-tech.net/gaming/pc/2006/11/02/Multi_core_in_the_Source_Engin/1

If a CPU core/thread is idling, then it's the fault of the scheduler for not assigning it a job to process, and also of the size of the jobs. The proper way to achieve parallelism is to split your tasks into as many jobs as possible and schedule them onto a core's thread. Whether one job finishes before another is irrelevant because the scheduler, in theory, should always be feeding jobs to the core that just finished. Cores idling is a sign of bad design; regardless of load, they should always be doing something.
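To put that in code, here's a bare-bones version of the idea: a shared queue of many small jobs, with worker threads grabbing the next one the moment they finish (illustrative names, not any shipping engine's scheduler):

```cpp
// Bare-bones job scheduler sketch: many small jobs in a shared queue,
// each worker thread pulling the next job as soon as it finishes one.
// Illustrative only; real schedulers add priorities, dependencies, etc.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> jobs;
    std::mutex m;

    for (int i = 0; i < 64; ++i)          // many small jobs, not one per core
        jobs.push([i] { std::printf("job %d done\n", i); });

    auto worker = [&] {
        for (;;) {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lock(m);
                if (jobs.empty()) return;  // queue drained: nothing left to feed us
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();                         // a finished core immediately grabs the next job
        }
    };

    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                     // fall back if the count is unknown
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```

As long as there are more jobs than cores, no core sits idle until the queue is empty.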
 

Truespeed

Member
lemon__fresh said:
The PC side is also NOT moving towards heterogeneous CPUs. It really makes more sense to have robust identical cores, although I do see what you're saying about the price point. It would be easier on the programmers in the long run though.

And the PS3 does have 6 robust identical cores. Oh, you must be referring to the PPU, which people abuse by running gameplay, AI, physics, sound, etc. on. The sole purpose of the PPU was to be a job scheduler for the SPU cores; any other monumental task it was assigned was the error of the developer. As we've heard repeatedly, the PS3 only shines when you move your systems to the SPUs. That's where the gain is, and games that don't do this are easily identifiable.
 
stuburns said:
What a speaker that guy is. The only person who makes tech talk at all compelling for me.

Yep, he talks for 30 minutes without being asked a question and still manages to stay extremely compelling.
Please god, give me half of Carmack's intelligence and rhetorical ability! :lol
 

Spoo

Member
Truespeed said:
And the PS3 does have 6 robust identical cores. Oh, you must be referring to the PPU, which people abuse by running gameplay, AI, physics, sound, etc. on. The sole purpose of the PPU was to be a job scheduler for the SPU cores; any other monumental task it was assigned was the error of the developer. As we've heard repeatedly, the PS3 only shines when you move your systems to the SPUs. That's where the gain is, and games that don't do this are easily identifiable.

IAWTP. Also -- and I'm no PS3 programmer, but I've considered trying it out -- one of the biggest gains with these SPEs comes from SIMD (vector) instructions; the general gist is that these cores are really good at fetching and crunching lots of data simultaneously, as opposed to a more linear approach. In short, this means much more algorithmic heavy lifting for operations that can take advantage of vector math. My guess is that a lot of the benefit we're seeing comes from 1) the people who know how to manage jobs, and 2) the people who know how to convert common game algorithms to vectorized instructions [when you think of games on a 3D mathematics level, it's probably not tough to see why vector instructions can do a lot]. When Carmack talks about more sweat and tears with the PS3, my assumption would be that they've had to essentially refactor to the SPE spec, and their team had to learn how to take advantage of vectorization among other things. Since compilers aren't really *there* yet when it comes to optimizing at a vectorized instruction level, it probably takes a lot of learning, experimentation and research.
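For a flavour of what "vectorized" means, here's a toy SSE example on PC (the SPEs have their own 128-bit vector ISA, so treat this purely as an analogy for the style, not actual SPE code): integrating positions four floats at a time instead of one.

```cpp
// Toy SIMD example with SSE intrinsics (PC). The SPEs use their own 128-bit
// vector instruction set, so this is an analogy for the style, not SPE code.
#include <cstdio>
#include <xmmintrin.h>

int main() {
    alignas(16) float pos[8] = {0, 1, 2, 3, 4, 5, 6, 7};  // particle positions
    alignas(16) float vel[8] = {1, 1, 1, 1, 2, 2, 2, 2};  // particle velocities
    const __m128 dt = _mm_set1_ps(0.016f);                // one 60Hz timestep

    // pos += vel * dt, four floats per iteration instead of one
    for (int i = 0; i < 8; i += 4) {
        __m128 p = _mm_load_ps(&pos[i]);
        __m128 v = _mm_load_ps(&vel[i]);
        _mm_store_ps(&pos[i], _mm_add_ps(p, _mm_mul_ps(v, dt)));
    }

    for (float f : pos) std::printf("%.3f ", f);
    std::printf("\n");
}
```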

Also, if I've misinterpreted this, *real* PS3 programmers -- please correct :D
 

Fersis

It is illegal to Tag Fish in Tag Fishing Sanctuaries by law 38.36 of the GAF Wildlife Act
schennmu said:
New interview by the man himself. As always, it's great to listen to him. He talks about Tech 5, the lighting system in Rage and a lot of other interesting things:

http://www.youtube.com/watch?v=YB0JzR81SPE&feature=related
http://www.youtube.com/watch?v=JGjIZc7lytg&feature=related
http://www.youtube.com/watch?v=N_Ge-35Ld70&feature=related

The guy is a genius.
Thanks for the link man.
Every game programmer (or graphics programmer) should listen to this guy.
I think of him as a gaming god.
 

RoboPlato

I'd be in the dick
New trailer coming today!
Geoff Keighley said:
Quakecon happens today in Texas... so expect Rage news this afternoon and a new trailer online. We'll also be airing it on GTTV tonight.
 

gofreak

GAF's Bob Woodward
proposition said:
I think their findings were basically that splitting up jobs to run independently wasn't efficient, because any job that runs faster or slower in a given frame means the cores it was running on just sit around idling, wasting time. They found that doing this over two cores could theoretically double your performance, but only in certain contrived situations; the actual gain was really only something like 1.2x.

http://www.bit-tech.net/gaming/pc/2006/11/02/Multi_core_in_the_Source_Engin/1

Yeah, that was Valve's 'approach number 1' - dedicating a specific core to a specific task.

Approach number 2 was about assigning tasks to any available core - have a task list or queue... much more like id's approach. That way if a task ends, the core then goes looking for another job (of any kind) that needs doing, picks it up, and starts processing, so you reduce idle time. Valve ended up mixing number 1 and number 2.
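Roughly, the hybrid looks like this (a toy sketch with made-up names, not actual Source code): one thread pinned to a dedicated role, plus generic workers draining a shared task queue like the sketch earlier in the thread.

```cpp
// Toy sketch of the hybrid: one thread dedicated to a fixed role (approach 1)
// plus generic workers draining a shared queue (approach 2). Made-up names,
// not actual Source engine code.
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    std::queue<std::function<void()>> tasks;   // AI, physics, particles, ...
    std::mutex m;

    for (int i = 0; i < 16; ++i)
        tasks.push([i] { std::printf("generic task %d\n", i); });

    // Approach 1: a core pinned to one specific job for the whole frame.
    std::thread render([] { std::printf("render thread: submit draw calls\n"); });

    // Approach 2: the remaining cores take whatever task is next.
    auto worker = [&] {
        for (;;) {
            std::function<void()> t;
            {
                std::lock_guard<std::mutex> lock(m);
                if (tasks.empty()) return;
                t = std::move(tasks.front());
                tasks.pop();
            }
            t();
        }
    };
    std::vector<std::thread> pool;
    for (int i = 0; i < 3; ++i) pool.emplace_back(worker);

    render.join();
    for (auto& t : pool) t.join();
}
```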
 
EvilDick34 said:
Jeez, he said that the textures take up terabytes of storage, holy crap.
Terabytes that compress into mere gigabytes on disk, and at runtime only the visible pages need to be resident, so it all streams through 256MB of VRAM (on PS3).

That really puts what Carmack is doing into perspective.
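Some back-of-the-envelope math (every figure below is my own guess, just to show the orders of magnitude involved):

```cpp
// Back-of-the-envelope megatexture math. Every figure is a guess, purely
// to show how "terabytes" of source art can shrink to a shippable game.
#include <cstdio>

int main() {
    const double texels    = 128000.0 * 128000.0;  // one 128k x 128k megatexture
    const double perLayer  = texels * 4;           // 4 bytes/texel, uncompressed
    const double perMega   = perLayer * 4;         // diffuse, normal, specular, ...
    const double sourceAll = perMega * 8;          // say ~8 such textures game-wide
    const double onDisk    = sourceAll / 100;      // aggressive lossy compression

    std::printf("source art: %.1f TB\n", sourceAll / 1e12);  // ~2.1 TB
    std::printf("on disk:    %.0f GB\n", onDisk / 1e9);      // ~21 GB
    // At runtime only the pages visible this frame stay resident, which is
    // how the whole thing streams through 256MB of VRAM.
}
```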
 

rezuth

Member
http://www.nowgamer.com/features/501/exclusive-rage-qa

What would you say to PC gamers concerned that Rage might suffer on PC due to its multiplatform development?

For the game designers the biggest focus for each system is the controller interface. We’ve taken very deliberate steps to make each system feel right in regards to the interface and the controller. We’ve been making PC games for over 15 years, so PC fans should rest assured that Rage will feel like a true PC game, not a PC port of a console game.

In Carmack we trust?
 
J_incredible said:
The Making of RAGE - Legacy of id Trailer:
http://www.youtube.com/watch?v=qlWQFkuINLI

The megatexture / Tech 5 technology is amazing. Really showing off what next-generation visuals/animation can do. Just the tip of the iceberg, probably, for id Software and the Tech 5 engine.

Pretty big bump, but if you're talking RAGE your heart is in the right place!

The next vid doc after "Legacy of id" is also doing the rounds, and looks even better.
 

Salazar

Member
Nostalgia~4ever said:
id Tech 5 has been surpassed by Frostbite 2.

You didn't read DICE's explanation of the shit they have to go through with FB2 (edit: FB1.5) in the "no mod tools" thread, then.

"Surpassed" ? In a general sense ? Fuck no.
 

commissar

Member
Nostalgia~4ever said:
Huh, what do you mean with the FB1.5 stealth edit?
Probably because the Frostbite 1.5 engine is what was referenced with respect to why there's no mod support for FB2 on PC.

As I recall, it was due to their tools being run across a network, as client/server with a unified source, which would mean rewriting the whole tool setup to make it work on a single PC... not really feasible to do.
 