
Xbox Next /Xbox 720 / Durango Patent?

StevieP

Banned
Would a powerful CPU provide such an advantage, since most games are... well... GPU-based? (stream processing, etc.)

Depends on which aspect of the game, I guess. In terms of strictly visuals, obviously not. But as bkilian put it, "the games will adapt to their environment".

There's no way DDR3 will be in the next Xbox (as primary memory); it just doesn't have enough bandwidth, any way you measure it, for gaming in HD. Possibly DDR4 with a big chunk of EDRAM, I guess.

DDR4 is basically DDR3 but slightly faster.

SquiddyBiscuit said:
Maybe there will be 1-2 GB of DDR3 memory for the variant of Windows 8 that they will use as the main OS?

Out of their entire pool of DDRx memory, some will be reserved for that kind of stuff. The exact amount is unknown as far as I know, but the number "3" has been thrown around here.

Ales said:
If MS really listened to Epic, Crytek, DICE, etc., the rumor of a 1 TFLOPS GPU doesn't make sense.
Epic has been pretty clear: if you want a technological leap, the GPU must be close to 2 TFLOPS.

http://www.neogaf.com/forum/showthread.php?t=477663
Tim Sweeney:
Unreal Engine 4’s next-generation renderer targets DirectX 11 GPU’s and really starts to become interesting on hardware with 1+ TFLOPS of graphics performance, where it delivers some truly unprecedented capabilities. However, UE4 also includes a mainstream renderer targeting mass-market devices with a feature set that is appropriate there.
 

tinfoilhatman

all of my posts are my avatar
I wonder how big a chunk of EDRAM (for 1080p/60) they would need if they went with the highest-speed DDR4?

"According to Samsung, DDR4 technology offers the best performance out of all memory products currently available, and it's expected to reach twice the current 1,600Mbps throughput of DDR3 by next year. DDR4 modules also require just 1.2 volts, reducing power consumption by about 40 percent compared to DDR3 memory operating at 1.35V."

http://hothardware.com/News/Samsung-Waves-Around-Industrys-First-16GB-DDR4-Memory-Modules/

Sounds like it's more than just slightly faster. I think most developers could live with 4-8 GB of DDR4 (the highest speed) and a huge chunk of EDRAM. It'll be an interesting next generation if Sony and Microsoft take drastically different approaches to memory.
 

StevieP

Banned
tinfoilhatman said:
I wonder how big a chunk of EDRAM (for 1080p/60) they would need if they went with the highest-speed DDR4?

"According to Samsung, DDR4 technology offers the best performance out of all memory products currently available, and it's expected to reach twice the current 1,600Mbps throughput of DDR3 by next year. DDR4 modules also require just 1.2 volts, reducing power consumption by about 40 percent compared to DDR3 memory operating at 1.35V."

http://hothardware.com/News/Samsung-Waves-Around-Industrys-First-16GB-DDR4-Memory-Modules/

Sounds like it's more than just slightly faster. I think most developers could live with 4-8 GB of DDR4 (the highest speed) and a huge chunk of EDRAM. It'll be an interesting next generation if Sony and Microsoft take drastically different approaches to memory.

DDR4 is starting at 2133, and DDR3 right now is faster than that (I've seen DDR3-3000). The primary benefit in DDR4's infancy is power consumption. Eventually that will lead to higher speeds as well (much higher speeds, according to Samsung/Micron/etc.), but keep in mind MS needs to be manufacturing this box by the middle of next year. That said, even with the higher speed and lower power consumption of DDR4, it's not G-spec. As mrklaw mentioned (in regard to my comment on the GeForce 650M with DDR3 vs. GDDR5), you do lose some performance in that comparison on the same graphics chip.
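To put those transfer rates in perspective, here's a back-of-the-envelope sketch of theoretical peak bandwidth, assuming the standard 64-bit DDR channel width (the function name is mine, for illustration only):

```python
# Rough peak-bandwidth arithmetic for the DDR3-3000 vs. DDR4-2133 comparison.
# Peak GB/s per channel = transfers per second * bytes per transfer.
def peak_bandwidth_gbs(mt_per_s, bus_width_bits=64, channels=1):
    """Theoretical peak bandwidth in decimal GB/s for a DDR channel."""
    bytes_per_transfer = bus_width_bits / 8
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_bandwidth_gbs(2133))  # DDR4-2133: ~17.1 GB/s per 64-bit channel
print(peak_bandwidth_gbs(3000))  # DDR3-3000: 24.0 GB/s per 64-bit channel
```

Either number is an order of magnitude short of what a wide GDDR5 bus delivers, which is why the EDRAM question keeps coming up.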

I am only speculating here, but the final box would probably utilize at least 32 MB of EDRAM if they plan on sticking to a full DDRx setup.
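For a rough sense of where a figure like 32 MB comes from, here's a hypothetical sketch of 1080p framebuffer sizes, assuming 32-bit color and a 32-bit depth/stencil buffer (assumptions and names are mine, not from the thread):

```python
# Back-of-the-envelope framebuffer sizing at 1920x1080.
# Assumes 4 bytes per pixel (e.g. 32-bit RGBA or 32-bit depth/stencil).
def framebuffer_mib(width, height, bytes_per_pixel=4, buffers=1):
    """Size in MiB of one or more full-resolution render targets."""
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

color = framebuffer_mib(1920, 1080)                        # ~7.91 MiB
color_plus_depth = framebuffer_mib(1920, 1080, buffers=2)  # ~15.82 MiB
print(color, color_plus_depth)
```

So 32 MB comfortably holds a 1080p color target plus depth, with headroom left for extra render targets, but not much room for high levels of MSAA.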
 

segarr

Member
You're proving my point. Epic has spoken of a 1+ TFLOPS GPU.
"1+" means at least 1.5-1.8 TFLOPS, certainly not 1-1.2 TFLOPS. Also, a few months before E3, Tim Sweeney had said 2.5 TFLOPS.
So a next Xbox with a 1 TFLOPS GPU would be an epic fail, in contrast with the demands of Epic, Crytek, DICE, etc.

Yeah, I have no idea why they'd not make a beast machine. Who cares if they have to take a hit on consoles sold; it's not like they're short on cash!
 

Ales

Neo Member
No - 1+ means 1+.

What difference would there be between a 1 TFLOPS GPU and a 1.1-1.2 TFLOPS GPU? None.
I don't understand how such a weak GPU could mean they listened to Epic, Crytek, DICE, etc.
It's evident that by "1+" Epic means a GPU close to 2 TFLOPS.

News

Epic Games' vice-president Mark Rein has confirmed that the company has been in talks with Sony and Microsoft to help "shape" the next-generation of consoles, adding that he's happy to wait until next-gen hardware can offer "a massive leap in performance and capabilities than get something today."

"If you're talking about the console you plug into the wall at home, I think that needs to be a really big jump," said Rein talking to VideoGamer.com at Develop about next-gen hardware.

"I think it needs to be a really good justifiable, 'Oh my gosh, look what you can do now that you couldn't do before'. And to do that at a reasonable price it just takes time.

"It's going to come out whenever it comes out," he continued, "and again, the whole do it right versus right now thing, I'd much rather get a massive leap in performance and capabilities than get something today."

Rein added that the firm's GDC 2011 Samaritan demo, which showed a cyberpunk peacekeeper battling thugs in a gritty city street, "was a demo to show what we think the consoles should... what we would like the next gen consoles to be able to do.

"In determining what the next consoles will be, I'm positive that [Sony & Microsoft are] talking to lots and lots of developers and lots of middleware companies to try and shape what it is. We've certainly been talking with them and we've been creating demonstrations to show what we think.

"And obviously the Elemental demo, same thing. We're certainly showing capability if they give us that kind of power, but so is everybody else."

http://www.videogamer.com/xbox360/g...ive_leap_in_next-gen_console_performance.html

As you can see, Epic continues to ask for powerful machines.
If the GPU really is 1 TFLOPS, MS hasn't listened to anyone. But this contradicts your other post.
 

StevieP

Banned
Ales said:
What difference would there be between a 1 TFLOPS GPU and a 1.1-1.2 TFLOPS GPU? None.
I don't understand how such a weak GPU could mean they listened to Epic, Crytek, DICE, etc.
It's evident that by "1+" Epic means a GPU close to 2 TFLOPS.

No, 1+ literally means 1+.
It could refer to a GPU that is 1.1 TF or a GPU that is 90064590438590438.1 TF, but it literally means 1+.

Don't confuse the meaning. It's in Epic's best interests to push platform makers so that they can sell a new engine rather than have licensees continue to use UE3 (as is going to be the case for the first couple years anyway). That doesn't mean that platform makers have to bow to their every demand to the tune of billions of dollars. Refer to that link I posted above from bkilian.
 

Ales

Neo Member
StevieP said:
No, 1+ literally means 1+.
It could refer to a GPU that is 1.1 TF or a GPU that is 90064590438590438.1 TF, but it literally means 1+.

I'm sorry, but I can't understand how a 1.1 TFLOPS GPU could mean they listened to Epic.
As you wrote in this post:
"They listened to Crytek/Dice/etc is what they did. That's the big change from the leaked 2010 stuff that I know of, and could possibly be what was heard by Alberto on the grapevine."
MS may very well have bet on such an underpowered console, but that means not listening to anyone.
 

StevieP

Banned
Ales said:
I'm sorry, but I can't understand how a 1.1 TFLOPS GPU could mean they listened to Epic.
As you wrote in this post:
"They listened to Crytek/Dice/etc is what they did. That's the big change from the leaked 2010 stuff that I know of, and could possibly be what was heard by Alberto on the grapevine."
MS may very well have bet on such an underpowered console, but that means not listening to anyone.

I was talking about RAM. There is nothing "underpowered" about an 8-core x86 console with 8 GB of DDR3/DDR4 and a 1+ TF GPU (whether that's 1.1, 1.2, 1.3, whatever). That's still several times more powerful than the 360 and will produce awesome results in that context, despite the fact that some of the processing power and memory will be tied up elsewhere.

I'm not sure what the problem is. Even a Cape Verde-level GPU at 1.2-1.3 TF would be more than double the Wii U's 500-600 GF, and not that far off the PS4's 1.8 TF.
There are two BF3 screenshots on the previous page that are a very simplified way of looking at the differences.
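For reference, the TF numbers thrown around in this thread come from the usual shader count × 2 ops per clock (multiply-add) × clock speed formula. A quick sketch, using publicly listed desktop specs for a Cape Verde part (the specs are my addition, not from the thread):

```python
# Single-precision throughput: shaders * 2 FLOPs per clock (multiply-add) * GHz.
def tflops(shader_count, clock_ghz):
    """Peak single-precision TFLOPS for a GPU."""
    return shader_count * 2 * clock_ghz / 1000

print(tflops(640, 1.0))    # Radeon HD 7770 (Cape Verde), 640 SPs @ 1.0 GHz: 1.28 TFLOPS
print(tflops(1024, 0.86))  # Radeon HD 7850, 1024 SPs @ 0.86 GHz: ~1.76 TFLOPS
```

These are peak theoretical numbers; real-game throughput depends on bandwidth, API overhead, and utilization, which is part of why raw FLOPS comparisons are so contentious here.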
 

USC-fan

Banned
Stevie - I have already told you that the BF3 screenshots mean nothing. You're making yourself look foolish by continuing to talk about them. The Wii U at this point would be lucky to be over 450 GFLOPS, IMO.
 

mrklaw

MrArseFace

StevieP said:
I'm not sure what the problem is. Even a Cape Verde-level GPU at 1.2-1.3 TF would be more than double the Wii U's 500-600 GF, and not that far off the PS4's 1.8 TF.

As meaningless as raw FLOPS are, 1.8 is 50% more than 1.2, which is more than "not far off" IMO. And with potentially slower RAM, hitting that number might be more difficult.
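mrklaw's arithmetic, spelled out as a tiny sketch (the function name is illustrative):

```python
# Relative gap between two raw-FLOPS figures, expressed as a percentage.
def percent_more(a, b):
    """How much more b is than a, as a percentage of a."""
    return (b - a) / a * 100

print(percent_more(1.2, 1.8))  # 1.8 TF is ~50% more than 1.2 TF
print(percent_more(0.5, 1.2))  # and 1.2 TF is ~140% more than a 500 GF Wii U
```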
 

StevieP

Banned
USC-fan said:
Stevie - I have already told you that the BF3 screenshots mean nothing. You're making yourself look foolish by continuing to talk about them. The Wii U at this point would be lucky to be over 450 GFLOPS, IMO.

It's being run on a 7770 on low (a 1.3 TF card) at a stable framerate, and on a 7850 on high (a 1.8 TF card) at a stable framerate.

Sure, PC API overhead and all that, but it's a way to visualize two GPUs with those two levels of hardware grunt running a current game.

mrklaw said:
As meaningless as raw flops are, 1.8 is 50% more than 1.2, more than 'not far off' IMO. And with potentially slower ram so hitting that number might be more difficult.

If the MS console has less GPU power available to it, games will scale and adapt accordingly.
 

USC-fan

Banned
StevieP said:
It's being run on a 7770 on low (a 1.3 TF card) at a stable framerate, and on a 7850 on high (a 1.8 TF card) at a stable framerate.

Sure, PC API overhead and all that, but it's a way to visualize two GPUs with those two levels of hardware grunt running a current game.

It's a PC game; it has nothing to do with consoles. People post benchmarks for PC games, not screenshots. You could have one running at 60 fps on high and one running at 5 fps on high. Guess what: they look the same, but one is unplayable.
 
Refreshing this thread before the Xbox announcement, as it's also my view of what's coming.

ARM IP for OS and set-top-box functionality, instead of an all-AMD APU + GPU setup. This was obvious after the Feb 20th PS4 reveal.

The following block diagram is likely very similar to the PS4's.

The system manager is the TrustZone processor. System memory is on the second chip and managed by the TrustZone processor. Apps and system/platform APIs live in TrustZone-managed memory. Notice that the platform CPU outside the app SoC communicates directly with the app SoC. It's likely that the one Jaguar CPU runs out of LPDDR3 memory in the second SoC, managed by the TrustZone processor. This matches Cerny's description: the APU is controlled by the second SoC until power-on, then by one Jaguar CPU. The app CPU and GPU are in the second chip's SoC. The AMD APU is the platform CPU and GPU.

[Attached block diagram: qzfSg.png]
 