
Intel shows off Larrabee graphics chip for first time

CTLance

Member
jonremedy said:
I don't think half of the people in this thread know what big-O notation even is, so your post might not make much of a case to them :lol
Well in that case I just made myself look way smarter than I actually am. Awesome.
 

Struct09

Member
It's awesome how far raytracing has come along over the years. When I studied it in my computer graphics course in college it was nowhere near where it is now.
 

shuyin_

Banned
I'm going to agree with all the people saying water looks bad.

cyberheater said:
I'd love to see a good quality ray tracing engine at good framerates. It should look amazing...
wat? Ray tracing rendering in real time? Is that possible now?
 

androvsky

Member
shuyin_ said:
I'm going to agree with all the people saying water looks bad.


wat? Ray tracing rendering in real time? Is that possible now?

IBM released a free raytracing demo for the Cell a couple of years ago. I ran it on my PS3 in Linux; it was just a Ferrari on a stand that you could rotate. It ran at 720p, between 10 and 30 frames per second depending on your quality settings.

I think this is the same demo (I don't have Flash on this machine), except running on more systems for a higher framerate.
http://www.youtube.com/watch?v=oLte5f34ya8
 
CTLance said:
Oh god, I shudder at the thought of the x86 architecture gaining even more traction. Ugh. It's such a messy instruction set/architecture.
But it's easy to implement and easy to use, right?

....regarding the scalability:

Now correct me if I'm wrong:

"Naive" raytracing is O(m*n), just like Rasterization. Raytracing using some spiffy binary tree magic and other nonsense can come close, if not match O(m*log(n)), and degrade to O(m*n) in worst case, which constitutes considerable savings.

[Legend: m := number of pixels ; n := number of triangles]
If I get this right, you neglected the reflection stuff.
I'll admit that this won't really be relevant for quite some time, I think, but SOMEONE THINK OF THE FUTURE! Once they have scenes with a kajillion billion triangles to display on their supermegahyperHD screens, humanity will be cursing its forefathers for not giving raytracing a chance!
I think it's time to switch to raytracing right now, just to leave rasterization behind. Just think about all the lightmaps, etc. that developers have to create with rasterization to get some nice effects. This doesn't get better with the next generation of consoles.
With raytracing, lighting is pretty much a non-issue. Just place a light source and you're done. :)
I do not wish to have a curse placed on my person, post-mortem or not. So the answer is clear: Raytracing for president. ;)
I'm with you!
 
camineet said:

In the article it's stating why the PC gaming market has been dwindling. Well, I mentioned this over a year ago in here and was just laughed at for even bringing that idea up by the typical egotistic GAFfers, lol, but it's not so far-fetched now, is it?

Anyway, I had this feeling that Larrabee wasn't all it was cracked up to be... but it's not fair to judge it quite yet, of course.
 
CTLance said:
Now correct me if I'm wrong:

"Naive" raytracing is O(m*n), just like Rasterization. Raytracing using some spiffy binary tree magic and other nonsense can come close, if not match O(m*log(n)), and degrade to O(m*n) in worst case, which constitutes considerable savings.

[Legend: m := number of pixels ; n := number of triangles]

You're a little off; the main error is that you completely neglected the reflected rays.
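To put the reflected-ray point in concrete terms, here's a minimal sketch (my own illustration, not code from any demo discussed here) of the usual recursive trace loop. With an acceleration structure, each intersection test is roughly O(log n) in the triangle count, but every reflective hit spawns another ray, so the number of traversals per pixel grows with bounce depth and scene reflectivity, which is exactly the term a plain O(m*log(n)) estimate leaves out. All the helper names are hypothetical placeholders.

Code:
/* Sketch only: every reflective hit spawns another ray, so one pixel
   can cost several BVH traversals, not just one. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 origin, dir; } ray;
typedef struct { vec3 point, normal; float reflectivity; } hit_info;

/* Placeholder: a real version walks the BVH, ~O(log n) per ray. */
static bool bvh_intersect(const ray *r, hit_info *hit)
{
    (void)r; (void)hit;
    return false;
}

/* Placeholder direct lighting: just return the surface normal as a color. */
static vec3 shade_local(const hit_info *hit)
{
    vec3 c = { hit->normal.x, hit->normal.y, hit->normal.z };
    return c;
}

/* Mirror the incoming direction about the surface normal. */
static vec3 reflect_dir(vec3 d, vec3 n)
{
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    vec3 r = { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
    return r;
}

static vec3 trace(ray r, int depth)
{
    vec3 black = { 0, 0, 0 };
    hit_info hit;

    if (depth <= 0 || !bvh_intersect(&r, &hit))
        return black;

    vec3 color = shade_local(&hit);

    /* The recursion: a reflective surface costs an extra trace() call. */
    if (hit.reflectivity > 0.0f) {
        ray bounce = { hit.point, reflect_dir(r.dir, hit.normal) };
        vec3 refl = trace(bounce, depth - 1);
        color.x += hit.reflectivity * refl.x;
        color.y += hit.reflectivity * refl.y;
        color.z += hit.reflectivity * refl.z;
    }
    return color;
}

int main(void)
{
    ray r = { {0, 0, 0}, {0, 0, -1} };
    vec3 c = trace(r, 3); /* up to 3 bounces per primary ray */
    printf("%f %f %f\n", c.x, c.y, c.z);
    return 0;
}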
 
CTLance said:
Oh god, I shudder at the thought of the x86 architecture gaining even more traction. Ugh. It's such a messy instruction set/architecture.

LOL! So I can assume you prefer RISC over CISC. You're either kidding or there is some other widely used microprocessor instruction set out there I'm not aware of.
 

Vaporak

Member
Mr.Potato Head said:
In the article it's stating why the PC gaming market has been dwindling. Well, I mentioned this over a year ago in here and was just laughed at for even bringing that idea up by the typical egotistic GAFfers, lol, but it's not so far-fetched now, is it?

Does GAF still get to laugh at you, since the studies on the matter say you're wrong?

On topic I'm still interested in what Larrabee, and Intel as a company, can bring to the PC gaming world. Sure, Larrabee probably won't be up to the standards of the other high end cards when it comes out, but more competition in the GPU space will be a good thing imo.
 
WOW.

These are the most civil and urbane YouTube comments I've ever seen.


Also, I can't believe you guys don't think that looks good. I mean, the modeling's not super sophisticated and there's not a whole lot going on, but it's super duper crisp and the lighting is nice.
 

Nirolak

Mrgrgr
Flying_Phoenix said:
So essentially it's solely the Larrabee CPU that's doing all of the graphics so that a dedicated GPU card isn't needed?
Correct. All the graphics are done by software rendering on the CPU array. This used to be really common before graphics cards started becoming standard around 1997, and now we're finally heading back that way as computing power has grown so much that it's actually more viable again.
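For anyone wondering what "software rendering" actually means in practice: the CPU computes every pixel itself and writes it into a framebuffer in ordinary memory, instead of handing triangles off to fixed-function hardware. A toy sketch (purely illustrative, nothing to do with Larrabee's actual renderer):

Code:
/* Toy "software rendering": the CPU fills every pixel of a framebuffer
   in plain memory. All shading is ordinary CPU code. */
#include <stdint.h>
#include <stdlib.h>

enum { WIDTH = 640, HEIGHT = 360 };

int main(void)
{
    uint32_t *framebuffer = malloc(WIDTH * HEIGHT * sizeof *framebuffer);
    if (!framebuffer)
        return 1;

    for (int y = 0; y < HEIGHT; ++y) {
        for (int x = 0; x < WIDTH; ++x) {
            /* Whatever shading you want happens right here, per pixel. */
            uint8_t r = (uint8_t)(255 * x / WIDTH);
            uint8_t g = (uint8_t)(255 * y / HEIGHT);
            framebuffer[y * WIDTH + x] = (r << 16) | (g << 8) | 0x40;
        }
    }

    /* A real program would now hand the buffer to the display. */
    free(framebuffer);
    return 0;
}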
 
lemon__fresh said:
damnnnnn, sucks for the people writing compilers.

The heck? Orthogonal load-store instruction sets are much easier to write compilers for than CISC machines like x86. In fact, that was one of the primary motivations for them. You don't have a million complicated addressing modes to worry about, instruction selection becomes trivial, and you have heaps of general purpose registers to play with.
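A rough illustration of what that means from the compiler's point of view (hand-written to make the point, not real output from any toolchain): on a load-store machine the lowering of a simple loop is completely uniform, while x86's memory operands and many addressing modes give the instruction selector a pile of equivalent choices to weigh for the same statement.

Code:
/* Hand-written illustration; the instruction sequences in the comments
   are approximate, not actual compiler output. */
#include <stdio.h>

static int accumulate(const int *values, int count)
{
    int sum = 0;
    for (int i = 0; i < count; ++i) {
        /* Load-store RISC lowering, roughly:
               load  r1, [values + i*4]   ; one simple addressing mode
               add   r0, r0, r1           ; register-to-register ALU op
           On x86 the compiler can instead fold the load into the add:
               add   eax, [esi + ecx*4]   ; memory operand, scaled index
           and it has to pick among several equivalent encodings, which
           is one reason instruction selection is messier for CISC. */
        sum += values[i];
    }
    return sum;
}

int main(void)
{
    int data[] = { 1, 2, 3, 4 };
    printf("%d\n", accumulate(data, 4)); /* prints 10 */
    return 0;
}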
 

Fafalada

Fafracer forever
lemon__fresh said:
damnnnnn, sucks for the people writing compilers.
The RISC term doesn't really mean what it used to anymore. Modern consoles since the PS2 era have bigger and more complex instruction sets than classic "CISC" CPUs (286 through Pentium) used to have.
Compiler writing sucks these days, but it's because they are all trying to figure out ways to do auto-parallelism. Single-threaded instruction sets are more friendly than ever.
 
Fafalada said:
The RISC term doesn't really mean what it used to anymore. Modern consoles since the PS2 era have bigger and more complex instruction sets than classic "CISC" CPUs (286 through Pentium) used to have.
Compiler writing sucks these days, but it's because they are all trying to figure out ways to do auto-parallelism. Single-threaded instruction sets are more friendly than ever.

Ahhh, I see. The last RISC programming I did was on a SPARC CPU. Having "heaps" of registers was great, but it was nice being able to do a lot more with only a few instructions (CISC).
 
Fafalada said:
The RISC term doesn't really mean what it used to anymore. Modern consoles since the PS2 era have bigger and more complex instruction sets than classic "CISC" CPUs (286 through Pentium) used to have.
Compiler writing sucks these days, but it's because they are all trying to figure out ways to do auto-parallelism. Single-threaded instruction sets are more friendly than ever.

I've heard that the gap between RISC and CISC has been closing.
 

SapientWolf

Trucker Sexologist
jonremedy said:
I don't think half of the people in this thread know what big-O notation even is, so your post might not make much of a case to them :lol
You can't count on that. I think we scared away most of the normal GAFfers with all the hardcore tech talk.
 

ghst

thanks for the laugh
Mr.Potato Head said:
In the article it's stating why the PC gaming market has been dwindling. Well, I mentioned this over a year ago in here and was just laughed at for even bringing that idea up by the typical egotistic GAFfers, lol, but it's not so far-fetched now, is it?

Anyway, I had this feeling that Larrabee wasn't all it was cracked up to be... but it's not fair to judge it quite yet, of course.

gaf will continue to laugh at you till the day you finally flip out and beat the server to death with a chain.
 

mrWalrus

Banned
Project Offset looks easy on the eyes, so I'm not at all worried about Larrabee's ability to make things look pretty.
Graphics are only as good as the artists making them.

Let's not bog down the thread with 'the water looks garbage' talk. Let the engineers duke it out. :D
 
Flying_Phoenix said:
I've heard that the gap between RISC and CISC has been closing.

That was true, but these days I think that trend (in some domains) has started to reverse. The narrowing of the performance gap between CISC and RISC is all about transistor count. Essentially all current CISC chips (certainly all modern x86 chips) are actually RISC processors with a front end that decodes their old CISC instruction set into some custom RISC instruction set which is what actually gets executed. So, CISC processors are really RISC processors that have to pay a tax in transistors to be compatible with an old instruction set. The amount of transistors required for the decoding is essentially fixed, so as the total transistor count of CPUs increases the cost of including the decode hardware falls. This is what has resulted in the performance gap closing.

However, the situation has recently changed. With Atom and Larrabee we're actually seeing a significant reduction in the transistor count of some processors. Sure, x86-64 and POWER are still neck-and-neck in big ass server CPUs, but Intel is trying to move down into a space where the x86 tax becomes significant again. The ARM Cortex A8s and A9s kick the crap out of Atom in terms of performance/price and performance/power ratios, and the Cortex A9 destroys Atom in terms of absolute performance as well. A Larrabee equivalent using ARM cores would probably have a similar performance advantage. As we move further down the line of using many more simpler cores the RISC advantage may become increasingly important.
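Some rough, purely illustrative arithmetic on that "x86 tax": if the legacy decode front end costs on the order of a few million transistors (an illustrative figure, not an official one), that's a substantial slice of a small Pentium/Atom-class core — the original Pentium was about 3 million transistors in total and Atom is in the tens of millions — but it rounds to nothing on a desktop or server chip with hundreds of millions to billions of transistors. Shrink the core back down, as Atom and Larrabee do, and that fixed cost matters again.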
 

CTLance

Member
lemon__fresh said:
LOL! So I can assume you prefer RISC over CISC. You're either kidding or there is some other widely used microprocessor instruction set out there I'm not aware of.
Leave CISC/RISC out of this. :D

I'll admit that I'm an ARM fanatic (free barrel shifter for life! ;), but my loathing of the x86 ISA has nothing to do with that. It has everything to do with several other factors, such as the lack of general-purpose registers, a whole lot of legacy crap, and a humongous number of irregularities/special rules that eat away at the inherent logic of the ISA.

OTOH, the x86 compilers are so incredibly sophisticated nowadays that it doesn't really matter anymore. Besides a handful of nutters (demosceners) and specialists, nobody really needs to bother with the assembler side of things. Thank the deities for that.

As has been stated before, modern CPUs are kind of Frankensteinian hybrids though, so the old CISC/RISC distinction doesn't really matter.
 
Jon of the Wired said:
That was true, but these days I think that trend (in some domains) has started to reverse. The narrowing of the performance gap between CISC and RISC is all about transistor count. Essentially all current CISC chips (certainly all modern x86 chips) are actually RISC processors with a front end that decodes their old CISC instruction set into some custom RISC instruction set which is what actually gets executed. So, CISC processors are really RISC processors that have to pay a tax in transistors to be compatible with an old instruction set. The amount of transistors required for the decoding is essentially fixed, so as the total transistor count of CPUs increases the cost of including the decode hardware falls. This is what has resulted in the performance gap closing.

However, the situation has recently changed. With Atom and Larrabee we're actually seeing a significant reduction in the transistor count of some processors. Sure, x86-64 and POWER are still neck-and-neck in big ass server CPUs, but Intel is trying to move down into a space where the x86 tax becomes significant again. The ARM Cortex A8s and A9s kick the crap out of Atom in terms of performance/price and performance/power ratios, and the Cortex A9 destroys Atom in terms of absolute performance as well. A Larrabee equivalent using ARM cores would probably have a similar performance advantage. As we move further down the line of using many more simpler cores the RISC advantage may become increasingly important.

So in short: until recently, CISC (x86) and POWER (RISC) weren't too different performance-wise. However, with the new wave of netbook and media-device CPUs, the gap is as big as it was in the old days, and it's likely to get bigger if netbooks and media devices take over the computing space. And Larrabee could be a sign of things to come.

But why would Intel go back down that route? Especially with Larrabee?
 
CTLance said:
Leave CISC/RISC out of this. :D

I'll admit that I'm an ARM fanatic (free barrel shifter for life! ;), but my loathing of the x86 ISA has nothing to do with that. It has everything to do with several other factors, such as the lack of general-purpose registers, a whole lot of legacy crap, and a humongous number of irregularities/special rules that eat away at the inherent logic of the ISA.

OTOH, the x86 compilers are so incredibly sophisticated nowadays that it doesn't really matter anymore. Besides a handful of nutters (demosceners) and specialists, nobody really needs to bother with the assembler side of things. Thank the deities for that.

As has been stated before, modern CPUs are kind of Frankensteinian hybrids though, so the old CISC/RISC distinction doesn't really matter.

You kind of brought it up when you mentioned instruction sets. I only really care about these things from an ease-of-software-development perspective, and it all kinda appears to come down to personal preference in that respect.
 
Flying_Phoenix said:
So in short: until recently, CISC (x86) and POWER (RISC) weren't too different performance-wise. However, with the new wave of netbook and media-device CPUs, the gap is as big as it was in the old days, and it's likely to get bigger if netbooks and media devices take over the computing space. And Larrabee could be a sign of things to come.

But why would Intel go back down that route? Especially with Larrabee?

Intel wants x86 to be pervasive in every market segment, period. I assume they plan to win the same way they won back in the Pentium days, when core sizes were even smaller than Atom or Larrabee. That is, they'll just spend twice as much on R&D as every other chip maker combined. Intel has always relied on their significant lead in manufacturing process tech and crazy expensive development techniques (e.g., laying out transistors by hand instead of using higher-level design tools) to close the performance gap. I imagine that's just what they'll keep trying to do.
 