Well in that case I just made myself look way smarter than I actually am. Awesome.jonremedy said:I don't think half of the people in this thread know what big-O notation even is, so your post might not make much of a case to them :lol
wat? Ray tracing rendering in real time? Is that possible now?cyberheater said:I'd love to see a good quality ray tracing engine at good framerates. It should look amazing...
Read the OP--this is real-time raytracing. That's the whole point, actually.shuyin_ said:wat? Ray tracing rendering in real time? Is that possible now?
shuyin_ said:I'm going to agree with all the people saying water looks bad.
but it's easy to implement and easy to use, right?CTLance said:Oh god, I shudder at the thought of the x86 architecture gaining even more traction. Ugh. It's such a messy instruction set/architecture.
If I get this right, you neglected the reflection stuff... Regarding the scalability:
Now correct me if I'm wrong:
"Naive" raytracing is O(m*n), just like rasterization. Raytracing using some spiffy binary tree magic and other nonsense can come close to, if not match, O(m*log n), and degrade to O(m*n) in the worst case, which constitutes considerable savings.
[Legend: m := number of pixels ; n := number of triangles]
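CTLance's complexity claim can be sanity-checked with a back-of-the-envelope sketch. This is purely illustrative Python (the function names are made up for the example, and the counts model intersection tests only, not real renderer cost), assuming a well-balanced acceleration structure:

```python
import math

def naive_tests(m_pixels, n_triangles):
    # Brute force: every primary ray is tested against every triangle -> O(m*n).
    return m_pixels * n_triangles

def tree_tests(m_pixels, n_triangles):
    # With a balanced spatial tree (BVH/kd-tree), each ray visits roughly
    # log2(n) nodes on its way down to a leaf -> O(m*log n) in the good case.
    return m_pixels * max(1, math.ceil(math.log2(n_triangles)))

m = 1920 * 1080  # one primary ray per pixel
for n in (1_000, 1_000_000):
    print(f"{n:>9} tris: naive {naive_tests(m, n):,}  tree {tree_tests(m, n):,}")
```

The gap widens as scene complexity grows, which is the scalability argument for raytracing; the worst case (a degenerate tree) still falls back to O(m*n).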
I think it's time to switch to raytracing right now just to leave rasterization behind. Just think about all the lightmaps, etc., that developers have to create with rasterization to get some nice effects. This doesn't get better with the next generation of consoles. I'll admit that this won't really be relevant for quite some time, I think, but SOMEONE THINK OF THE FUTURE! Once they have scenes with a kajillion billion triangles to display on their supermegahyperHD screens, humanity will be cursing their forefathers for not giving raytracing a chance!
I'm with you! I do not wish to have a curse placed on my person, post-mortem or not. So the answer is clear: Raytracing for president.
camineet said:
CTLance said:Now correct me if I'm wrong:
"Naive" raytracing is O(m*n), just like rasterization. Raytracing using some spiffy binary tree magic and other nonsense can come close to, if not match, O(m*log n), and degrade to O(m*n) in the worst case, which constitutes considerable savings.
[Legend: m := number of pixels ; n := number of triangles]
CTLance said:Oh god, I shudder at the thought of the x86 architecture gaining even more traction. Ugh. It's such a messy instruction set/architecture.
Mr.Potato Head said:In the article it's stating why the PC gaming market has been dwindling. Well, I mentioned this over a year ago in here and was just laughed at for even bringing that idea up by the typical egotistic GAFfers, lol, but it's not so far-fetched now, is it?
All the consoles, both home and handheld, are RISC right now. As is the iPhone and just about every other phone in the world too.lemon__fresh said:LOL! So I can assume you prefer RISC over CISC. You're either kidding or there is some other widely used microprocessor instruction set out there I'm not aware of.
aeolist said:All the consoles, both home and handheld, are RISC right now. As is the iPhone and just about every other phone in the world too.
Edit: http://en.wikipedia.org/wiki/Reduced_instruction_set_computer#RISC_success_stories
kevm3 said:Looks like something from Myst.
Correct. All the graphics are done by software rendering on the CPU array. This used to be really common before graphics cards started becoming standard around 1997, and now we're finally heading back that way as computing power has grown so much that it's actually more viable again.Flying_Phoenix said:So essentially it's solely the Larrabee CPU that's doing all of the graphics so that a dedicated GPU card isn't needed?
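As a toy illustration of what "all the graphics done by software rendering on the CPU" means, here is a minimal sketch in plain Python: one ray-sphere intersection and a tiny framebuffer filled entirely on the CPU, no GPU involved. The helper name and the scene are invented for the example; a real renderer like the one in the OP is vastly more involved.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for t (direction assumed unit length).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                    # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None      # nearest hit in front of the camera

# Fill a tiny framebuffer: brightness falls off with hit distance.
WIDTH, HEIGHT = 4, 4
sphere_center, sphere_radius = (0.0, 0.0, 5.0), 1.0
framebuffer = []
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Map the pixel to a unit direction through a pinhole camera at the origin.
        dx = (x + 0.5) / WIDTH - 0.5
        dy = (y + 0.5) / HEIGHT - 0.5
        length = math.sqrt(dx * dx + dy * dy + 1.0)
        direction = (dx / length, dy / length, 1.0 / length)
        t = ray_sphere_hit((0.0, 0.0, 0.0), direction, sphere_center, sphere_radius)
        framebuffer.append(0.0 if t is None else 1.0 / t)
```

Everything here runs on ordinary CPU cores; the idea behind Larrabee is to throw many such cores (plus wide vector units) at exactly this kind of per-pixel loop.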
The RISC term doesn't really mean what it used to anymore. Modern consoles since the PS2 era have bigger and more complex instruction sets than classic "CISC" CPUs (286-Pentium) used to have.lemon__fresh said:damnnnnn, sucks for the people writing compilers.
Fafalada said:The RISC term doesn't really mean what it used to anymore. Modern consoles since the PS2 era have bigger and more complex instruction sets than classic "CISC" CPUs (286-Pentium) used to have.
Compiler writing sucks these days, but it's because they're all trying to figure out ways to do auto-parallelism. Single-threaded instruction sets are friendlier than ever.
You can't count on that. I think we scared away most of the normal GAFfers with all the hardcore tech talk.jonremedy said:I don't think half of the people in this thread know what big-O notation even is, so your post might not make much of a case to them :lol
Mr.Potato Head said:In the article it's stating why the PC gaming market has been dwindling. Well, I mentioned this over a year ago in here and was just laughed at for even bringing that idea up by the typical egotistic GAFfers, lol, but it's not so far-fetched now, is it?
Anyways, I had this feeling Larrabee wasn't all it was cracked up to be... but it's not fair to judge it quite yet, of course.
Flying_Phoenix said:I've heard that the gap between RISC and CISC is closing.
Jesus fuck at those particle effects!mrWalrus said:Project Offset looks easy on the eyes. So I'm not at all worried about the Larrabee's ability to make things look pretty.
Graphics are only as good as the artists making them...
Leave CISC/RISC out of this.lemon__fresh said:LOL! So I can assume you prefer RISC over CISC. You're either kidding or there is some other widely used microprocessor instruction set out there I'm not aware of.
Jon of the Wired said:That was true, but these days I think that trend (in some domains) has started to reverse. The narrowing of the performance gap between CISC and RISC is all about transistor count. Essentially all current CISC chips (certainly all modern x86 chips) are actually RISC processors with a front end that decodes their old CISC instruction set into some custom RISC instruction set which is what actually gets executed. So, CISC processors are really RISC processors that have to pay a tax in transistors to be compatible with an old instruction set. The amount of transistors required for the decoding is essentially fixed, so as the total transistor count of CPUs increases the cost of including the decode hardware falls. This is what has resulted in the performance gap closing.
However, the situation has recently changed. With Atom and Larrabee we're actually seeing a significant reduction in the transistor count of some processors. Sure, x86-64 and POWER are still neck-and-neck in big ass server CPUs, but Intel is trying to move down into a space where the x86 tax becomes significant again. The ARM Cortex A8s and A9s kick the crap out of Atom in terms of performance/price and performance/power ratios, and the Cortex A9 destroys Atom in terms of absolute performance as well. A Larrabee equivalent using ARM cores would probably have a similar performance advantage. As we move further down the line of using many more simpler cores the RISC advantage may become increasingly important.
CTLance said:Leave CISC/RISC out of this.
I'll admit that I'm an ARM fanatic (free barrel shift for life!), but my loathing of the x86 ISA has nothing to do with that. It has everything to do with several other factors, such as the lack of general-purpose registers, a whole lot of legacy crap, and a humongous amount of irregularities/special rules that eat away at the inherent logic of the ISA.
OTOH, the x86 compilers are so incredibly sophisticated nowadays that it doesn't really matter anymore. Besides a handful of nutters (demosceners) and specialists nobody really needs to bother with the assembler side of things. Thank the deities for that.
As has been stated before, modern CPUs are kind of Frankensteinian hybrids though, so the old C/RISC differentiation doesn't really matter.
Flying_Phoenix said:So in short: until recently, CISC (x86) and POWER (RISC) weren't too different performance-wise. However, with the new wave of netbook and media-device CPUs, the gap has become as big as it was in the old days, and it's likely to get bigger if netbooks and media devices take over the computing domain. And Larrabee could be a sign of things to come.
But why would Intel go back down that route? Especially with Larrabee?