XtremeXpider
Member
Yep. We're going from AMD to AMD.
It's not just because of nVidia->AMD; it's also because Microsoft pays nVidia for each emulated game.
I just watched the E3 demo of Watch Dogs. Gameplay was nice, but graphics-wise I don't see anything next gen. Background people look like GTA models; even some cars aren't that detailed. GoW3 has better graphics than that imo. I'm not sure what you guys are on about unless there's a more recent trailer.
But I just don't see Microsoft incorporating that kind of thing into the 720. Microsoft hasn't cared about original Xbox emulation since 2007...more than 5 years ago.
This is obviously a joke post.
The raindrops reflected light from muzzle flashes and the cloth simulation was really good. Keep in mind it was an open world game, and it still looked better than most games this gen and moved from indoors to outdoors seamlessly.
All I'm saying is I'd rather they allocate too much and later be able to reallocate it to devs than for them to allocate too little and have to really squeeze every last KB out of it like they have with the current dash.
I swear it isn't. The character models did not impress me. The physics seem better and more organic. Rain is good but I'm sure other games have done that. Is there a better trailer to show off watch dogs? I'll watch star wars soon to see if that's actually next gen.
I didn't know current gen consoles could do open world games with bokeh DOF in-game...
If we actually get visuals like this in the final game, without the typical Ubi smoke and mirrors, I'll be ecstatic.
Riiiight
Um. Both are going to get an IQ drop compared to their i7+gtx680-based PC counterparts. If that's what you mean by baseline.
I think the Watch_Dogs explosion .gif may be the first game .gif I've been unable to stop watching since the KZ2 days.
There was also no gap between cutscene models and in-game models. IQ was also through the roof. Look at the video, then find the least compressed screens you can find, and try to reconcile the two.
Watch dogs is the most visually impressive video game I've ever seen.
Night and day difference.
Loading times mean nothing to me...for all I know they ran it on a high end PC. Is there an uncompressed video file I can download? I know YouTube makes videos worse.
Like I said, gameplay is great, and if it's a big open world game like GTA then maybe I should be more understanding, since having good graphics in an open world is very hard. I was just looking at it from a graphics perspective only. The Samaritan demo smokes this graphics-wise, easily, for me. Same with all the other next gen demos, but then again they are not open world...
You really need to see how current gen consoles are choking on visually ambitious games like Far Cry 3 to realize that the version of Watch Dogs you are looking at on screen isn't even possible on consoles, unless you want a cinematic 5fps.
It's possible with MASSIVE cutbacks though.
They need to work on the eyes for 1313. Something about them seems lifeless. The game is still in development so I'm sure it will improve.
As for Watch Dogs, I'll wait to see how big the environment is, the amount of little things you can do and interact with, and how AI and physics turn out in the final product. Some high res photos of Sleeping Dogs from the high res PC thread are comparable, to be honest, but then again those pics are from PC, not console.
So these are the only next gen games unofficially announced, I take it. Right now I'm worried about both consoles being powerful and a bit future proof. If the final hardware turns out good then I'm not even worried about software, because I know devs will deliver beast games.
In regards to the eyes in 1313, those character models were placeholder to avoid showing certain details of the game. The 1313 stories in the expanded universe are the origins of Boba Fett so I think they wanted to avoid revealing that detail too early.
Grand theft Auto V vs Watch Dogs
And Watch Dogs was shown last year...remember when we first saw a next gen title for the Ps3/360?
It was Dark Sector I think, in 2005!
One year later Gears of War came out.
God that Dark Sector trailer looks dreadful by today's standards haha. Can't believe we're still in the same generation..
If we can expect Watch Dog graphics next gen, I will be very disappointed.
If it's good enough for you, nice for you. But it's not good enough for me. And please stop with the "consoles use super duper parts that are much much better" fairy tale.
So, like I said, a 1.3-1.5TF GPU would be shit after eight years.
It would be 5X Xenos in raw terms, PLUS likely fantastically more efficient by being so much more modern. So I think 8X in real terms at minimum.
Which might be more impressive on a 5 year cycle rather than an 8 year one, but still is not bad and qualifies as a true generational leap imo.
And ERP (programmer) on B3D made a great post, basically about how the memory subsystem is likely to be more of a factor than shader ALU's. He stated at a guess 40% of shader resources go unused on PC code.
So basically imo it means theoretically you could take a 3 teraflop PC GPU, and it might be equivalent to a 1.5 TF console GPU fully utilized.
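A back-of-envelope sketch of that utilization argument, using only the guesses quoted above (ERP's ~40% wasted-ALU estimate on PC code); none of these figures are measurements:

```python
# Effective shader throughput = peak throughput * fraction actually used.
# Illustrative numbers only: 3 TF PC GPU with ~40% of ALU resources idle
# vs. a hypothetical 1.5 TF console GPU assumed to be fully utilized.

def effective_tflops(peak_tflops: float, utilization: float) -> float:
    """Scale peak shader throughput by the fraction of ALUs kept busy."""
    return peak_tflops * utilization

pc_effective = effective_tflops(3.0, 1.0 - 0.40)   # PC: 40% wasted
console_effective = effective_tflops(1.5, 1.0)     # console: full use assumed

print(f"PC effective:      {pc_effective:.2f} TF")
print(f"Console effective: {console_effective:.2f} TF")
```

Under those assumptions the 3 TF PC part lands at 1.8 TF effective, in the same ballpark as the fully utilized 1.5 TF console part, which is the post's point.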
Originally Posted by tunafish View Post
Yes, but there is no hope in hell anyone has ever gotten anywhere near that on a real load.
The Jaguar cores are absurdly more efficient. When there was a lot of talk about jaguar cores possibly being in the next console, I went and bought myself a bobcat minilaptop to practise on the cpu. The more code I write for it, the more impressed I am of it.
This thing is crazy efficient. Without any optimization beyond what is done by the compilers, it gets near 1 IPC. Add the same level of effort we put into the consoles last time, and we are talking about something like 1.7 IPC. And that's x86 instructions that have memory read operands baked into them.
So while a Xenon typically ran at something like 20% of its capability (two threads at 0.2 IPC on a 2-wide core), this thing gets to 85% capability with one thread.
By these numbers a Bobcat thread would be something like 4 times faster than a Xenon thread. And that's for raw flops, which are by far the strongest point of the Xenon and the weakest point of the Bobcat. Simple integer stuff that's always needed, like branching, is just ridiculously faster on Bobcat.
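The per-thread arithmetic behind that "4 times faster" estimate can be sketched as below. The 3.2 GHz Xenon clock is the known figure; the ~1.6 GHz Bobcat clock is my assumption for the mini-laptop, and the IPC numbers come from the post:

```python
# Per-thread instruction throughput = IPC per thread * clock (GHz),
# giving billions of instructions per second per thread.
xenon_clock_ghz, xenon_ipc_per_thread = 3.2, 0.2   # post's 0.2 IPC/thread figure
bobcat_clock_ghz, bobcat_ipc = 1.6, 1.7            # with console-level optimization

xenon_thread_gips = xenon_clock_ghz * xenon_ipc_per_thread
bobcat_thread_gips = bobcat_clock_ghz * bobcat_ipc

ratio = bobcat_thread_gips / xenon_thread_gips
print(f"Bobcat/Xenon per-thread throughput: {ratio:.2f}x")
```

With those assumed clocks the ratio comes out around 4.25x, matching the "something like 4 times faster" claim; a different assumed Bobcat clock would shift it proportionally.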
Wait, so shaders go unused because RAM isn't somehow fast enough...or? What does the "memory subsystem" mean?
Because even on a 256bit bus it would have fairly limited bandwidth.
Having said that, if it has 8GB of memory, which seems to be a consistent rumor, I'd pretty much guarantee it's DDR3, which implies the embedded memory will act as an additional "fast" memory pool.
Without knowing the details of the fast pool it's hard to say if the limited main memory bandwidth is an issue.
IMO the number of texture reads per pixel has gone through the roof over the last few years, and having a large slow pool and a small fast pool is probably more of an issue today, if textures are read from the slow pool, than it was when the 360 shipped.
Some of that could possibly be alleviated with larger caches; IME it's really hard to predict how a system will behave until you run code on it.
The FLOP figures everyone has attached performance to (I guess we can blame Tim Sweeney for that) are only relevant if the bulk of shaders are ALU limited, and I don't believe that's the case; for that to happen you have to have enough register space and cache to hide memory reads. I would bet a fair amount of ALU resources are wasted on modern GPUs; I'd guess in the 40% range over the course of a frame.
The memory configuration is likely to be as big an indicator of performance as anything else.
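For the bandwidth side of that argument, peak bandwidth is just bus width times effective data rate. A sketch with illustrative module speeds (DDR3-2133 and 5.5 GT/s GDDR5 are my assumptions for comparison, not leaked specs):

```python
# Peak memory bandwidth in GB/s:
#   (bus width in bytes) * (effective transfer rate in MT/s) / 1000
def peak_bandwidth_gbs(bus_bits: int, data_rate_mts: int) -> float:
    return (bus_bits / 8) * data_rate_mts / 1000.0

# "Large slow pool": 256-bit DDR3-2133
ddr3 = peak_bandwidth_gbs(256, 2133)    # roughly 68 GB/s
# Alternative: 256-bit GDDR5 at 5500 MT/s
gddr5 = peak_bandwidth_gbs(256, 5500)   # roughly 176 GB/s

print(f"256-bit DDR3-2133: {ddr3:.1f} GB/s")
print(f"256-bit GDDR5:     {gddr5:.1f} GB/s")
```

That gap is why a DDR3 main pool would lean so heavily on an embedded "fast" pool, and why people were rooting for GDDR5 despite the smaller capacity.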
I swear I read a developer saying you need 1.5-2x the specs on PC to do equivalent console graphics. Can't find it, though.
Here
Yes, that must be it. Or maybe I'm trying to give people a more balanced idea of what to expect in terms of architecture, from a position of someone who knows that there is more to CPU performance than just a core count, or a thread count, or an operating frequency.
Also, an accusation of favoritism coming from you is rich indeed.
A PR mouthpiece/Viral marketer. IMO... ProElite and Specialguy are the only ones I would consider real.
Two thoughts -
1) 1.7 typical IPC for the dual-issue Jaguar, even on highly optimized code, strikes me as ridiculously optimistic. You'd only have a hope with code that almost never misses in L1 or mispredicts. Not sure what your experience has been with optimizing x86, but could you expand on how you think you can get such a big gain here?
2) You give the 0.2 IPC number for Xenon to make your comparison but then say it'd be much worse for simple integer stuff. That's double-penalizing it. 0.2 IPC was never specifically for the FP heavy streaming code (raw FLOPs) which Xenon was best at.
This is purely hypothetically speaking - not trying to make a real world case here at all - but if you do happen to have an algorithm that you can run with extremely deep software pipelining, is highly FMADD dominated, and extremely prefetch friendly, I think you can on average sustain higher throughput on a Xenon core than a Jaguar one despite having the same peak FP (never mind Bobcat which has half). The reason I say this is because you need two instructions for FADD + FMUL on Bobcat vs one FMADD on Xenon, which can be paired with FP loads and stores which are AFAIK designed to stream L2. On a wider processor this wouldn't really matter much but when comparing two-wide vs two-wide it can make a difference. Also, the huge number of registers can help. Although you don't really need them for scheduling purposes on Jaguar some FP kernels can still make better use of > 16 registers.
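The fused-multiply-add point is just instruction bookkeeping: for a hypothetical kernel of `a*x + b` operations, each element costs one FMADD on a core that has fused multiply-add, but an FMUL plus an FADD where multiply and add are separate instructions, doubling the FP issue slots consumed on a 2-wide machine. A sketch of that count (the kernel length is arbitrary):

```python
# Issue-slot bookkeeping for an FMA-dominated kernel (a*x + b per element).
elements = 1024               # hypothetical kernel length

instrs_fused = elements * 1   # fused multiply-add: one FP instruction/element
instrs_split = elements * 2   # separate FMUL + FADD: two FP instructions/element

print(f"fused: {instrs_fused} instrs, split: {instrs_split} instrs, "
      f"ratio: {instrs_split / instrs_fused:.1f}x")
```

On a wide out-of-order core the extra instructions can hide in spare issue slots, but comparing two-wide against two-wide, as the post says, the 2x instruction count bites directly into peak FP throughput.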
wait, specialguy is vetted? news to me (no offense special, just first ive heard that)
I will NEVER be able to unsee that.
Interesting, I've always suspected that having one large pool of slow memory + eDRAM wasn't really going to cut it.
I was always rooting for 4GB of GDDR5 over 8 GB of DDR3..
His foot smashed up that taxi like it was nothing.