
[Eurogamer\DF] Orbis Unmasked: what to expect from the next-gen PlayStation.

Margalis

Banned
MSAA on the 360 is basically "free" in terms of performance hit, but it's not free in terms of eDRAM usage or in terms of the actual cost of the components used to allow for the "free" MSAA. And if you use too much eDRAM you get a performance hit in terms of tiling or a quality hit with sub-HD resolutions, blah blah blah.
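Quick back-of-envelope on the tiling point, in Python (assuming 32-bit colour plus 32-bit depth/stencil per sample, which is my assumption, not a quoted spec):

# Does a 720p 4xMSAA render target fit in the 360's 10 MB of eDRAM?
width, height = 1280, 720
samples = 4
bytes_per_sample = 4 + 4          # assumed: 32-bit colour + 32-bit depth/stencil
edram_bytes = 10 * 1024 * 1024

framebuffer = width * height * samples * bytes_per_sample
tiles = -(-framebuffer // edram_bytes)   # ceiling division
print(framebuffer / 2**20)   # ~28 MB, nearly 3x the eDRAM
print(tiles)                 # 3 tiles, so geometry gets resubmitted per tile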

Nothing in life is free.

Specialized components always have the same tradeoff relative to more general components: better price/performance for what they are designed for, but they trap you inside their box. The 360 is not good for things like deferred rendering, fat buffers, etc, because those don't fit into the 360 box.

The grass is always greener on the other side. If you have a general architecture you always have the niggling feeling that more specialized components would give you a boost, if you have specialized components you always feel like you could do more with a general purpose system instead. IMO there's no right answer, it's just tradeoffs.
 

ghst

thanks for the laugh
No, I didn't create these games.

However, I'm happy that it looks like we'll finally get decent AF in games with PS4.

coding is voodoo to me, i assumed you guys all wore belts full of tools for monitoring throughput under your robes.
 

gofreak

GAF's Bob Woodward
"Completely free" MSAA/AO/lighting sounds like bullshit to me.

It sounds like the usual 'screenspace stuff uses eDRAM bandwidth so is 'free' with respect to main memory bandwidth'.

On some of the other posts, though: there's not going to be some general-case win for Orbis. It won't be like '30fps Durango = 60fps Orbis'. With regard to bandwidth, anyway, I think there'll be games that do better on Durango's setup than Orbis's, and vice versa.
 

mrklaw

MrArseFace
Looks like solid state drives are around a 600 MB/s transfer rate, quite a bit below DDR3, but that may be enough for important streaming capabilities because it's about 5x as fast as a typical hard drive.

In comparison, an 8x Blu-ray drive would be 288Mbps (roughly 36MB/s).


Don't see them using an SSD like that, too expensive. I hope they have mandatory HDD installs though; transfer speeds from HDD are much better than from Blu-ray.
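For rough context, the rates being compared (the HDD figure is an assumed ballpark; the rest are from the post above):

blu_ray_8x = 288 / 8   # 288 Mbps -> 36 MB/s
typical_hdd = 120      # MB/s, assumed ballpark for a 5400/7200rpm drive
sata_ssd = 600         # MB/s, from the quoted figure

print(sata_ssd / blu_ray_8x)     # ~17x faster than 8x Blu-ray
print(typical_hdd / blu_ray_8x)  # ~3x faster than 8x Blu-ray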
 

Ashes

Banned
Blanket definite statements like these often end up as egg on one's face. Based on recent (last-decade) history it would make sense that MS tools should be better, but the most recent example we have of development tools for a console is the Vita, and it is generally accepted as being simple to develop for.

Whilst that's true, it still doesn't negate that Microsoft will have better development tools. Who knows? Maybe they will be on par, or the difference will be negligible. But colour me surprised if Sony has better development tools than Microsoft.

edit: If they are on par, then maybe Sony should really get into the software-making business too.
 

Ashes

Banned
On paper a 6950 outdoes a 7850 if you look at specs like teraflops etc., but would you want a 6950 or a 7850?

If Microsoft has something really, really special from the generation ahead, then maybe it will compete, and even suggesting that we are comparing apples to apples is misleading.
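The 'on paper' numbers, for reference (using published reference specs, so treat the exact figures as approximate):

def tflops(shader_count, clock_mhz):
    # single-precision peak: shaders * 2 ops (FMA) * clock
    return shader_count * 2 * clock_mhz / 1e6

print(tflops(1408, 800))   # HD 6950 (VLIW4): ~2.25 TFLOPS
print(tflops(1024, 860))   # HD 7850 (GCN):   ~1.76 TFLOPS
# ...and yet the GCN-based 7850 matches or beats the 6950 in most games.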
 

AzaK

Member
I said 100+ (distinction). As I said, despite some similarities (little CPU, bigger GPU) the Wii U still has a hardware deficiency. And a memory one too. A 60 or 100 watt difference in the world of physics is a pretty large difference in terms of percentage.

Not claiming Wii U is anywhere near Orbis/Durango, but it does matter how well those watts are used. It will be very interesting to compare, I think. I imagine a good Wii U game might look pretty comparable to the first Orbis/Durango titles for a little while, but maybe at 720p/30fps instead of 1080p/60fps.
 
On paper a 6950 outdoes a 7850 if you look at specs like teraflops etc., but would you want a 6950 or a 7850?

If Microsoft has something really, really special from the generation ahead, then maybe it will compete, and even suggesting that we are comparing apples to apples is misleading.

7850 used a brand new architecture. We're talking about systems where the difference is more like a 7850 versus a 7950. Guess which one is better there?
 

mrklaw

MrArseFace
See, this is the part that gets me. How is it that Sony possibly got the jump on MS in terms of rumored specs? I understand MS is going for the set-top box but...

But your Durango will be TiVoing the latest episode of Cake Boss* for you while you play CoD. That's priceless.



*service only available to FIOS subscribers in the US with additional Xbox live subscription
 

Ashes

Banned
Yes, I ignored your blind faith in fairy dust.

To have blind faith would be to believe that Microsoft will most certainly have the better GPU; I'm merely keeping the options open and highlighting that specs on paper are meaningless without other architectural information.

Note, I'm not referring to 'special sauce'. And also note that I'm merely presenting an argument. Now unless you're going to resort to stating a well-founded argument, we should probably call it quits now.

;)
 
I think just beefing up the GPU is a better, less complex, and more efficient way than putting in an extra dedicated processor for those tasks.
Heard it here first, folks! Junior members on message boards are better console designers than the company that has previously designed the easiest and most dev-friendly consoles to work with, with documented support for software development.
 

Durante

Member
It sounds like the usual 'screenspace stuff uses eDRAM bandwidth so is 'free' with respect to main memory bandwidth'.
That's what it sounds like, but then it's strange to bring up in a discussion of the relative FLOPs of the systems, since, though it may be free in terms of main memory bandwidth, it certainly won't be free in terms of GPU FLOPs.
 

B.O.O.M

Member
Heard it here first, folks! Junior members on message boards are better console designers than the company that has previously designed the easiest and most dev-friendly consoles to work with, with documented support for software development.

Really uncalled for. I mean, if we go down that road, we are all on an internet forum and no more than a few of us have actually developed games. We might as well stop discussions right then and there.

If you disagree with him then just point out why.
 
According to the article by Richard in the OP, the PS Orbis is the most powerful console based on what they know. It sounds like it has the best RAM money can buy. If they're incorrect about that conclusion and you know different information, then please share. ; )

No, the best RAM money can buy would be on a 512-bit bus, which would give you close to 400GB/sec of memory bandwidth at the same clock speed.

The GTX 680 doesn't have impressive bandwidth (compared to previous-gen cards and the time that has passed) since its memory bus is only 256-bit.

The HD 7970, for example, has about 264GB/sec of VRAM bandwidth on a 384-bit bus, so it takes less of a performance hit at higher resolutions and AA levels than the GTX 680.

Remember the GTX 680 is GK104; the real high-end Kepler (GK110) was never released because a higher-clocked GK104 was all Nvidia needed to compete with AMD (who dropped the ball, and kicked it onto a freeway while trying to pick it back up).
That's why the GTX 680 is only a 170W card (it would be less at the original clock speeds) compared to last gen's GTX 580, which was a 300W card.
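The bandwidth arithmetic behind those numbers, if anyone wants to check (effective memory clocks are the published reference specs):

def bandwidth_gb_s(bus_width_bits, data_rate_mt_s):
    # peak bandwidth = bytes per transfer across the bus * transfers per second
    return bus_width_bits / 8 * data_rate_mt_s / 1000

print(bandwidth_gb_s(256, 6008))   # GTX 680:  ~192 GB/s
print(bandwidth_gb_s(384, 5500))   # HD 7970:  ~264 GB/s
print(bandwidth_gb_s(512, 6000))   # 512-bit bus at similar clocks: ~384 GB/s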
 

kpjolee

Member
Heard it here first, folks! Junior members on message boards are better console designers than the company that has previously designed the easiest and most dev-friendly consoles to work with, with documented support for software development.

Oh, I don't think I was arguing against any console designers or developers, so what is your point?
 
To have blind faith would be to believe that Microsoft will most certainly have the better GPU; I'm merely keeping the options open and highlighting that specs on paper are meaningless without other architectural information.

Note, I'm not referring to 'special sauce'. And also note that I'm merely presenting an argument. Now unless you're going to resort to stating a well-founded argument, we should probably call it quits now.

;)

No, you're just perpetuating the myth that Microsoft has magic on their side, despite all indications to the contrary.
 
Could you give me some research to back up this "twice as powerful" statistic? I'm not being facetious, either.

I'm also not really exaggerating the difference. The current top-of-the-line AMD processor often gets substantially lower performance than that i5 I posted in that build. We are talking as much as 30-50 frames per second worse performance. And, as has been stated, these AMD chips are not even that powerful. They are more like improved mobile varieties, apparently. Here is a comparison chart showing various game and application performances of the best AMD processors versus current Intel i5s.

The following is a very optimistic comparison of the rumored AMD processor versus the i5 in the build I listed, since this is currently a top-of-the-line non-mobile AMD:

http://www.anandtech.com/bench/Product/434?vs=288




If you think "console optimization" makes up that difference, I guess that is maybe possible, but that's a hell of a lot of optimization fairy dust that is required.
Sorry if this has already been answered.

Mate, PC devs can't optimise games for 8 cores when there are millions of PCs with dual and quad core CPUs. With next-gen (assuming the rumour is correct) devs know there WILL be 8 cores, which means they can optimise the game to run on 8 threads, and game engine functions will probably get their own core. That's something they can't do if they don't know how many cores a game will run on. The same goes for the GPU: there are so many wildly different graphics card specs across several generations of GPU hardware that they can't optimise graphics the same way, because it has to run on ALL of the different specs.
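A minimal illustration of the difference (a Python stand-in with purely hypothetical names, not real engine code):

import os
from concurrent.futures import ThreadPoolExecutor

CONSOLE_CORES = 8   # known at ship time, if the 8-core rumour is right

def make_worker_pool(is_console: bool) -> ThreadPoolExecutor:
    # On a fixed console spec you can hard-code how engine systems are split
    # across cores; on PC you have to size the pool at runtime and hope the
    # same job breakdown still makes sense on 2, 4, 8 or 16 cores.
    workers = CONSOLE_CORES if is_console else (os.cpu_count() or 2)
    return ThreadPoolExecutor(max_workers=workers)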
 

Ashes

Banned
It's not Kinect... That's too obvious.

It's major architectural changes or little else, in my opinion. I could be wrong, but *if* they are both 7000-series based, for example, and one is a 7770 and the other is a 7750 [both underclocked and/or without a couple of CUs], I doubt such things will be overcome.


edit: Especially when one looks like a glorified graphics card and the other looks like a PC/media-centre-ish type of thing.
 
I really hope the PS4 has Wireless N. Hate not being able to utilise the 100 megabit connection we have here.

Hopefully in both 2.4GHz and 5GHz bands.

On the special technology side, I hope Sony will have a Virtex-7 for security, picture/video manipulation and audio.
 

CorrisD

badchoiceboobies
I really hope the PS4 has Wireless N. Hate not being able to utilise the 100 megabit connection we have here.

I see no reason why it wouldn't; the Vita has N, so they probably just couldn't be bothered with the PS3. Personally, though, I just have all my consoles wired.
 
For a console? Yes, absolutely. It's in line with a normal generational leap over the current gen.

4GB of RAM is an 8x increase over the PS3/360 memory. The 360's 512MB was an 8x increase over the Xbox's 64MB of RAM. The PS2's 32MB of RAM was a 9x increase over the PS1's 3.5 MB (2MB system RAM, 1MB VRAM, 512KB audio RAM).

So in relative terms there hasn't really been an increase in RAM for this new gen. Faster, yes, but when you consider that for high-quality visuals at 1080p they will need to set aside up to 1GB for textures, 2.5GB isn't all that much for large open-world games like Skyrim & BF3 in 2013. By the time 2015-2016 rolls around, with people wanting/expecting ever better-looking games, 2.5GB will become a limiting factor.

Personally I feel 6GB would have been the sweet spot for both these consoles for the next 5 years.
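The generational ratios spelled out, using the memory totals above:

ram_jumps = {
    "PS1 -> PS2":              32 / 3.5,     # ~9x
    "Xbox -> 360":             512 / 64,     # 8x
    "PS3/360 -> rumoured 4GB": 4096 / 512,   # 8x
}
for label, ratio in ram_jumps.items():
    print(label, round(ratio, 1))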
 

Ashes

Banned
Generational leaps are pretty interesting. Some people think Moore's law is coming to an end. What this gen also has, which no previous gen has faced, is the power ceiling.
 
Generational leaps are pretty interesting. Some people think Moore's law is coming to an end. What this gen also has, which no previous gen has faced, is the power ceiling.

Moore's law has already been broken in terms of RAM speed, CPU speed AND GPU speed.

RAM fell behind when DDR3 was introduced (double or triple the latency compared to DDR2 means very little actual performance increase; early DDR3 performed worse than high-clocked DDR2).

CPU speeds stalled when Ivy Bridge released, and Haswell is looking like another 10 percent increase, so it's not even close to Moore's law anymore. I'm not even going to dignify what AMD is doing (and I own a Phenom II based PC).

GPU increases are hopelessly behind too, especially since the HD 8xxx series will release 6 months later than the usual schedule and there is no sign of a date for GK110 (which should have released last year as the 6xx series).

Not only does Moore's law for performance no longer apply, prices for GPUs and HDDs have increased to the point where you barely get more performance per dollar today than you would have in 2008-2009. So even if hardware progress still followed Moore's law, the consumer is seeing no benefit from it.

Duopolies will do that. (The HDD market is also close to a duopoly now, since Seagate bought Samsung's HDD division and WD bought Hitachi's, with Toshiba picking up the leftovers.)
No incentive to compete, technology stalls, consumers get squeezed.


At CES, AMD representatives let slip that the HD 8xxx series has been completed and ready for release for a while now, but they are putting it off until Q2 2013 because they can still move plenty of HD 7xxx cards. Again, this is what a duopoly will do. Both AMD and Nvidia have settled into this behaviour for a while now, since they know trying to outcompete each other will lose them both a lot of profits.

Capitalism is supposed to drive progress (lol).
 

Razgreez

Member
^^

Can't view this in a bubble though. Global economic factors taken into consideration only a very brave (stupid?) company would take serious risks right now
 
Isn't Moore's Law about transistor (complexity) density on an integrated circuit rather than performance? So far it has not been broken, and it seems that it won't be in the near future. There might be a problem with very small structures, but I guess Intel already has a plan for their fabs.
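As a rough calculation of what the classic formulation predicts (assuming the usual ~2-year doubling cadence, which is the assumption here):

years_per_doubling = 2
console_generation_years = 8
doublings = console_generation_years / years_per_doubling
print(2 ** doublings)   # ~16x the transistors in the same area over one console generation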
 
Moore's law has already been broken in terms of RAM speed, CPU speed AND GPU speed.

RAM fell behind when DDR3 was introduced (double or triple the latency compared to DDR2 means very little actual performance increase; early DDR3 performed worse than high-clocked DDR2).

CPU speeds stalled when Ivy Bridge released, and Haswell is looking like another 10 percent increase, so it's not even close to Moore's law anymore. I'm not even going to dignify what AMD is doing (and I own a Phenom II based PC).

GPU increases are hopelessly behind too, especially since the HD 8xxx series will release 6 months later than the usual schedule and there is no sign of a date for GK110 (which should have released last year as the 6xx series).

Not only does Moore's law for performance no longer apply, prices for GPUs and HDDs have increased to the point where you barely get more performance per dollar today than you would have in 2008-2009. So even if hardware progress still followed Moore's law, the consumer is seeing no benefit from it.

Duopolies will do that. (The HDD market is also close to a duopoly now, since Seagate bought Samsung's HDD division and WD bought Hitachi's, with Toshiba picking up the leftovers.)
No incentive to compete, technology stalls, consumers get squeezed.


At CES, AMD representatives let slip that the HD 8xxx series has been completed and ready for release for a while now, but they are putting it off until Q2 2013 because they can still move plenty of HD 7xxx cards. Again, this is what a duopoly will do. Both AMD and Nvidia have settled into this behaviour for a while now, since they know trying to outcompete each other will lose them both a lot of profits.

Capitalism is supposed to drive progress (lol).


Lol, your post is wrong on so many levels. RAM latencies are pretty much a myth (they really aren't going up), GPU speed increases are about what is to be expected considering that power usage can't go up by much anymore, and your performance/dollar claim is, no offense, ridiculous.
A HD4870/1GB was about 250€ in the fourth quarter of 2008. Today a HD7870/2GB is 200€.
That's a performance increase of something like 150% at lower power consumption, a lower price and more features. HDDs are a very, very special case, still partly influenced by the Thailand flood. See SSD prices for comparison.


You might wanna give a source for that AMD rep thing. AFAIK it's just a rumour, and one that honestly makes no sense. You don't say something like that publicly.
 

UV-6

Member
I see no reason why it wouldn't; the Vita has N, so they probably just couldn't be bothered with the PS3. Personally, though, I just have all my consoles wired.
Cool, if the Vita has it then I feel better about that.

Unfortunately I can't have mine wired due to the setup and location of the router but I still get decent speeds on a wi-fi connection. Hopefully Wireless N will make it even better though.
 
Isn't Moore's Law about transistor (complexity) density on an integrated circuit rather than performance? So far it has not been broken, and it seems that it won't be in the near future. There might be a problem with very small structures, but I guess Intel already has a plan for their fabs.

You are right.
But I think it's meaningless for the consumer when it doesn't translate into more performance.

Nvidia went for small dies and a relatively narrow bus, and Intel did the same (a smaller die).
You'd expect prices to go down accordingly, but nope.
The consumer only bears the consequences when costs go up (the usual pity party), not when they go down.

Lol, your post is wrong on so many levels. RAM latencies are pretty much a myth (they really aren't going up), GPU speed increases are about what is to be expected considering that power usage can't go up by much anymore, and your performance/dollar claim is, no offense, ridiculous.
A HD4870/1GB was about 250€ in the fourth quarter of 2008. Today a HD7870/2GB is 200€.
That's a performance increase of something like 150% at lower power consumption, a lower price and more features. HDDs are a very, very special case, still partly influenced by the Thailand flood. See SSD prices for comparison.


You might wanna give a source for that AMD rep thing. AFAIK it's just a rumour, and one that honestly makes no sense. You don't say something like that publicly.
I bought my HD 4870 for 130 euros in Q1 2009 (not on sale, no promotion, the median price).
6 months ago (Q3 2012) the 7870 was still 350 euros. Nearly 3x the price for less than 3x the performance. In fact the HD 4770 kept topping the performance/dollar charts on benchmark sites that track these things until the AMD price drop a few months back (congratulations, you can now get a bit more performance for your buck than you could 4 years ago, all hail the corporate overlords in all their benevolence).
Hence why I said 'barely': performance per dollar was lower before the price drops; now it has finally improved a bit over 3.5 years ago (the market has spoken).

RAM latencies a myth? OK then... whatever you say. In fact it was the much-improved clock speeds that cancelled out the higher CAS latency; there was no performance benefit (not in synthetic benchmarks either) in going from DDR2 to DDR3 until the lower-CAS, higher-clocked 32/28nm sticks were out.

You have no idea what you are talking about with the power consumption comment. Power consumption WAS about as high as it could get with the GTX 580/HD 6970, yes, at a ridiculous 300W.
A GTX 670 is a 150W card (the GTX 680 is 170W) because, as I already mentioned, they are mid-range-sized dies branded as high-end parts.
GTX 580 = 520mm^2 huge die --> 300W card pushing the limits of what you can cool in a 2-slot GPU.
GTX 680 = 295mm^2 mid-range-sized die --> 170W card that can be cooled on air at low RPM so it's no louder than your case or PSU fans. (sauce for power consumption: http://www.guru3d.com/articles_pages/geforce_gtx_680_review,9.html )

Hell, the GTX 260 had a 450mm^2 die and consumed 180W of power many, many years ago, so yes... the GTX 580 was at the limit of power draw; the GTX 670 and 680 aren't even close.

Personally I'm relieved that the current cards have more reasonable die sizes and power consumption; Nvidia and AMD clocked and volted those large dies to unreasonable levels to eke some more performance out of their cards because they were stuck at 40-45nm for three generations.
Too bad the prices aren't appropriate... the GTX 680 should never have cost more than 250 euros.

They should still have released GK110; now all the poor sods with 1440p or 120Hz monitors end up having to deal with SLI shenanigans and driver problems because they are forced to buy two mid-range GPUs to get the performance they want.

I admit I could be wrong on the CES thing; I should have left that out since there is no real way to confirm it.
Everything else I said still stands.
 
You are right.
But I think it's meaningless for the consumer when it doesn't translate into more performance.

Nvidia went for small dies and a relatively narrow bus, and Intel did the same (a smaller die).
You'd expect prices to go down accordingly, but nope.
The consumer only bears the consequences when costs go up (the usual pity party), not when they go down.


I bought my HD 4870 for 130 euros in Q1 2009 (not on sale, no promotion, the median price).
6 months ago (Q3 2012) the 7870 was still 350 euros. Nearly 3x the price for less than 3x the performance. In fact the HD 4770 kept topping the performance/dollar charts on benchmark sites that track these things until the AMD price drop a few months back (congratulations, you can now get a bit more performance for your buck than you could 4 years ago, all hail the corporate overlords in all their benevolence).
Hence why I said 'barely': performance per dollar was lower before the price drops; now it has finally improved a bit over 3.5 years ago (the market has spoken).


Why are you using (greatly inflated) HD7870 prices from just after launch to compare them against an extremely favorable price you got on the HD4870 9 months after it launched?
That doesn't make any sense at all. It's 200€ now.

Just btw, I searched for an old article stating prices: the HD4870/1GB was ~180€ in February 2009 according to PCGH, so don't get me wrong, but I'd trust an article from back then over you (you might be remembering HD4850 or 4830 prices).
So it's 180€ versus 190€ for ~150% more performance, double the VRAM and lower power consumption. That's still a relatively low increase for a ~4-year time span, I'll grant you that.


http://www.pcgameshardware.de/Grafi...Einzelkarte-gegen-Mittelklasse-SLI-CF-677162/
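Putting those figures together (the +150% performance number comes from the paragraph above, so treat it as an estimate):

hd4870_price_eur = 180   # early 2009, per the PCGH article
hd7870_price_eur = 190   # early 2013
performance_ratio = 2.5  # "~150% more performance"

price_ratio = hd7870_price_eur / hd4870_price_eur
print(performance_ratio / price_ratio)   # ~2.4x more performance per euro over ~4 years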


RAM latencies a myth? OK then... whatever you say. In fact it was the much-improved clock speeds that cancelled out the higher CAS latency; there was no performance benefit (not in synthetic benchmarks either) in going from DDR2 to DDR3 until the lower-CAS, higher-clocked 32/28nm sticks were out.


It's pretty much a myth, yes. IMO it's still coming from the DDR1/DDR2 days, where it indeed didn't make much sense to go from DDR-400 to DDR2-533 (a bit more bandwidth, worse latencies). But the "full" generational jumps, say from DDR-400 to DDR2-800 to DDR3-1600, show no latency increase at all.
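Putting numbers on that (the CAS latencies below are typical retail values for each generation, so an assumption rather than a spec):

def cas_latency_ns(cl_cycles, data_rate_mt_s):
    # memory clock is half the data rate (DDR), so latency = cycles / clock
    return cl_cycles / (data_rate_mt_s / 2) * 1000

print(cas_latency_ns(3, 400))     # DDR-400   CL3  -> 15.0 ns
print(cas_latency_ns(6, 800))     # DDR2-800  CL6  -> 15.0 ns
print(cas_latency_ns(11, 1600))   # DDR3-1600 CL11 -> 13.75 ns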


You have no idea what you are talking about with the power consumption comment. Power consumption WAS about as high as it could get with the GTX 580/HD 6970, yes, at a ridiculous 300W.
A GTX 670 is a 150W card (the GTX 680 is 170W) because, as I already mentioned, they are mid-range-sized dies branded as high-end parts.
GTX 580 = 520mm^2 huge die --> 300W card pushing the limits of what you can cool in a 2-slot GPU.
GTX 680 = 295mm^2 mid-range-sized die --> 170W card that can be cooled on air at low RPM so it's no louder than your case or PSU fans. (sauce for power consumption: http://www.guru3d.com/articles_pages/geforce_gtx_680_review,9.html )

Hell, the GTX 260 had a 450mm^2 die and consumed 180W of power many, many years ago, so yes... the GTX 580 was at the limit of power draw; the GTX 670 and 680 aren't even close.

Personally I'm relieved that the current cards have more reasonable die sizes and power consumption; Nvidia and AMD clocked and volted those large dies to unreasonable levels to eke some more performance out of their cards because they were stuck at 40-45nm for three generations.
Too bad the prices aren't appropriate... the GTX 680 should never have cost more than 250 euros.

They should still have released GK110; now all the poor sods with 1440p or 120Hz monitors end up having to deal with SLI shenanigans and driver problems because they are forced to buy two mid-range GPUs to get the performance they want.

I admit I could be wrong on the CES thing; I should have left that out since there is no real way to confirm it.
Everything else I said still stands.



This generation is a special situation, because Nvidia decided, for whatever reason (probably some kind of fundamental problem that they couldn't solve fast enough), not to release the usual high-end chip in the ~500mm² range. They will do this soon with GK110, though.
AMD is pretty much at the usual chip size of ~350-400mm² for its biggest chip. Power usage is slightly lower for AMD too compared to last generation, but that's due to this being the first 28nm generation. They still need room for better performance in the second (and maybe even the third) 28nm GPU generation, and most of that increase is going to come from increasing power usage.

Just btw, real-world power consumption for the HD 6970 and GTX 580 was more like ~205W and ~250W respectively. Higher figures mostly came from Furmark.

See this for example: http://ht4u.net/reviews/2011/amd_radeon_hd_6570_hd_6670_turks_im_test/index13.php
 

Razgreez

Member
They should still have released GK110; now all the poor sods with 1440p or 120Hz monitors end up having to deal with SLI shenanigans and driver problems because they are forced to buy two mid-range GPUs to get the performance they want.

The sad part is, I was one of those sods. Which is why I sold my GTX 680 and searched for the most balanced perf/watt and perf/dollar card at the time. That, coincidentally, appears to be the 7870/7850, which are exactly the cards rumoured to be in these systems, in some form or another at least.
 

i-Lo

Member
There is a lot of perceived "damage control" from both sides here just based on rumoured specs. Imagine what'll happen when E3 rolls over. Hopefully GAF doesn't crash.

The extra silicon on the next Xbox will be dedicated to freeing up resources for the GPU. You can expect things like MSAA, certain situations in lighting, AO, etc. to be completely free. I've heard rumors that the machine is designed so that exploiting 100% of the hardware will be very simple. Something I have not heard about Orbis. Then again, I never heard of any such Orbis special sauce either.

Sony on the other hand is using a brute-force method. I have zero idea which will win, and if anyone on this board is a dev or programmer, we want to hear what you have to say.

Microsoft's specs on paper suggest a smaller profile, a thinner unit, and a lighter BoM. This is certain.

If true, then Sony stands to be the loser of the two by spending more money for the same performance. Then again, it's Sony.
 

Avtomat

Member
Moore's law has already been broken in terms of RAM speed, CPU speed AND GPU speed.

RAM fell behind when DDR3 was introduced (double or triple the latency compared to DDR2 means very little actual performance increase; early DDR3 performed worse than high-clocked DDR2).

CPU speeds stalled when Ivy Bridge released, and Haswell is looking like another 10 percent increase, so it's not even close to Moore's law anymore. I'm not even going to dignify what AMD is doing (and I own a Phenom II based PC).

GPU increases are hopelessly behind too, especially since the HD 8xxx series will release 6 months later than the usual schedule and there is no sign of a date for GK110 (which should have released last year as the 6xx series).

Not only does Moore's law for performance no longer apply, prices for GPUs and HDDs have increased to the point where you barely get more performance per dollar today than you would have in 2008-2009. So even if hardware progress still followed Moore's law, the consumer is seeing no benefit from it.

Duopolies will do that. (The HDD market is also close to a duopoly now, since Seagate bought Samsung's HDD division and WD bought Hitachi's, with Toshiba picking up the leftovers.)
No incentive to compete, technology stalls, consumers get squeezed.


At CES, AMD representatives let slip that the HD 8xxx series has been completed and ready for release for a while now, but they are putting it off until Q2 2013 because they can still move plenty of HD 7xxx cards. Again, this is what a duopoly will do. Both AMD and Nvidia have settled into this behaviour for a while now, since they know trying to outcompete each other will lose them both a lot of profits.

Capitalism is supposed to drive progress (lol).


Don't confuse Intel being unwilling to increase core counts and clock speeds with Intel being unable to do so. It makes more business sense for Intel to limit clock speed increases and core count increases.

RAM latency has probably decreased slightly over the years in actual time, but increased in clock cycles, which is a fairly meaningless metric.

I thought GK110 was out, but would only be coming as a Tesla/workstation card with no consumer variant?
 

Fezan

Member
The extra silicon on the next Xbox will be dedicated to freeing up resources for the GPU. You can expect things like MSAA, certain situations in lighting, AO, etc. to be completely free. I've heard rumors that the machine is designed so that exploiting 100% of the hardware will be very simple. Something I have not heard about Orbis. Then again, I never heard of any such Orbis special sauce either.

Sony on the other hand is using a brute-force method. I have zero idea which will win, and if anyone on this board is a dev or programmer, we want to hear what you have to say.

Microsoft's specs on paper suggest a smaller profile, a thinner unit, and a lighter BoM. This is certain.

So a software company is designing better hardware than THE hardware company?
 