I am pretty skeptical about this. The infrastructure in the US will make this lose out to the PS4. Of course I take this article with a grain of salt.
The current devkits have 7970's inside them. If games were designed using that part, when moving to beta devkits (which btw most third parties don't have yet, and are quite pissed about) they will receive a downgrade of some form.
Speaking of memory, Norden hyped up the 8GB of GDDR5 RAM in the system as the type of memory that's currently usually found only on high-end graphics cards. Calling the RAM "expensive" and "exotic," Norden stressed that you "can't buy this [RAM] for 50 bucks... that's why high-end graphics cards cost as much as they do."
wtf are you talking about?! 7970? what? how? since when?
Durango has 7970???
But (ignoring rumored reservations, which can be changed later anyway)...
0% more CPU
0% more setup engines
0% more RAM quantity...
And the bandwidth numbers might be misleading since Durango has ESRAM to work around that. It's expensive and it's there for a reason. PS3 also has 2X the bandwidth of 360 this gen, doesn't matter.
true i suppose. but again, setup engines same, triangle throughput same, etc.
go look at 7770 (Durango) vs 7850 (Orbis) benches. same game, same settings might be, say, 23 fps vs 35 or something like that. big, but somehow not ginormous.
yes you can, more or less. since ps3 can and does texture from xdr. the ps3 gpu gets fed with ~2x the main bandwidth the 360 gpu does.
But the 360 has 10MB of EDRAM...turns out to be pretty important...
Like I said, it's not there for nothing, it's expensive. Acting like it doesn't exist is silly.
It's much easier for one company to paint a prettier picture when the other one hasn't announced anything. If the roles were reversed, Microsoft would have done the same.
That seems unbelievable. Developers will be targeting a set specification, so even if some internal specs are changed, games would still have been developed with a set spec in mind.
What you're suggesting simply isn't believable.
Hmm, look what I found
http://arstechnica.com/gaming/2013/...4s-hardware-power-controller-features-at-gdc/
Anyways, article may deserve a new thread? GDC presentation writeup.
What I mean is that the power of these consoles will lead to lots of situations where games, because they can look even better on super high-end PC hardware, may be shown on PC more regularly than ever before, because devs will feel they can achieve more or less a similar look on each console, with changes appropriate for each, without people noticing. I really believe people are underestimating just how difficult it will be to nitpick a really talented developer's game visually this gen. A lot of power has been provided for them to do things for real that they had to fake before, and now they have an even greater ability to fake ever more impressive things. Art and game design are going to be hugely important this gen for distinguishing your game from all the other amazing titles that will surely be out there.
Well his info so far has been almost 100% correct. There is no reason to doubt him.
If what thuway says is true, we are looking at 1.2 (7770) vs 3.8 TFLOPS (7970). That's 2.6 TFLOPS of downgrade right there. Such a move will undoubtedly cause quite a noticeable graphical downgrade (unless developers knew that the final GPU would be much weaker and never used the Radeon 7970 all the way).
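For reference, that gap can be sanity-checked from the desktop cards' public specs. A rough sketch, where the stream-processor counts and clocks are AMD's published desktop figures (my assumption, not anything from this thread):

```python
# Theoretical single-precision TFLOPS for the desktop cards discussed above.
# ALU counts and clocks are AMD's desktop specs (my numbers, not from the
# thread); each GCN ALU retires 2 FLOPs per cycle via fused multiply-add.
def tflops(stream_processors, clock_ghz, flops_per_cycle=2):
    return stream_processors * clock_ghz * flops_per_cycle / 1000.0

hd7970 = tflops(2048, 0.925)  # alpha-kit card per the rumor, ~3.79 TFLOPS
hd7770 = tflops(640, 1.0)     # rumored Durango-class part, ~1.28 TFLOPS

print(round(hd7970, 2), round(hd7770, 2), round(hd7970 - hd7770, 2))
```

That lands at roughly a 2.5 TFLOPS gap, in line with the ~2.6 figure you get from the rounded 3.8 and 1.2 numbers.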
Microsoft could very well allow developers to show much better-looking games simply to start "Durango is better than PS4" talk post-E3 and generate (albeit fake) hype. This would of course backfire immediately after Durango's release, when alpha vs. final build comparisons start. Regardless of the backfiring, initial (E3) impressions would likely stick in many gamers' heads, and that alone could be enough for many to consider Durango equal to or even better than PS4.
Another explanation is that perhaps Microsoft once pursued a more powerful machine (initial rumors all pointed to Durango being more powerful than PS4) and eventually gave up, deciding that price & media center capabilities > gaming power.
I'm speaking more in terms of money.
Anyone?
Yeah, it's GPU.
But those other comments I'm not too sure about. We know they modified the GPU for compute processes, but we don't know exactly how.
Thanks and it was unfortunately in the wrong thread.
That strategy didn't work out too well for Sony and the PS3...Would MS really risk the ridicule from what will be the potential early adopters?
The real problem is that you can't compare graphical output just by the FLOP count of the GPU. Just see it this way: a 1.2 TFLOPS GPU in a closed environment >>> 1.2 TFLOPS in a PC.
Sure, there would be ridicule, but maybe Microsoft believes it's worth the risk. Besides, their behavior regarding next gen is bizarre at best. They are dead silent about Durango; they are allowing Sony to spread their wings and generate hype without even attempting to interfere. Articles about how "PS4 is great, easy to work with and powerful" are a normal everyday thing, and how does Microsoft react to that? With dead silence.
Considering the above, execs at Microsoft could very well think it's a smart move.
I am very much aware of that. I was comparing the 7770 (rumored Durango GPU equivalent) against a 7970 in a devkit (also a closed environment). It's true that there is probably much more overhead in a devkit than in a final console (a devkit being a development environment), but I very much doubt that overhead is so great that it would require 2.6 extra TFLOPS.
Well, the non-beta devkits lack the move engines, the ESRAM pool and several other Durango-GPU features, which is compensated for by bruteforcing things on the 7970 (if true), thus minimizing the practical power gap between the two. So I don't think there will be a downgrade as suggested by thuway. Devs just have to make use of the distinct beta-GPU features once they get their hands on the beta devkits.
Ah, wasn't aware of this. If that's the case, then you are probably right, they could be bruteforcing these features on 7970.
But I still do wonder, isn't 2.6 extra TFLOPS a bit excessive for emulating these features? We are talking about surplus of power greater than the entire FLOPS count of the final Durango.
Keep in mind PC's have higher API overheads and things like PCI bandwidth limitations.
Small things add up here and there. Final silicon won't be nearly as powerful, but since the chips are all directly on the PCB, it'll be much more efficient. No interfaces for it to deal with.
Do you also believe that there will be a drop in graphics once the devs are given the final kits or will they be about the same?
Well, they aren't going to know the constraints of the system until they use it. Why not shoot higher and then scale back later? It's easier to get the big work done and then drop down.
IIRC PS3 and 360 early works did the same thing.
BTW I am predicting a downgrade based on what developers have at the moment. Most third parties don't have beta devkits yet and are using uber-powerful alpha kits with 7970s. You can make a damn gorgeous-looking demo using those kits. The next devkits will mark the start of optimization, a task that will last until the day of release.
IMHO devs will get the optimization done rather fast and games will run and look just like on the alpha kit.
But the GPU in the alpha kits is significantly more powerful.
Why would MS still have not released the beta dev kits to third parties?
As far as we know MS opted for the Durango project quite some time ago, and the specs didn't change that much. This is strange.
They should be way ahead of this.
Are the PS4 beta dev kits out in the open already?
Did they have any serious problems at the foundries when making the chips?
So the whole "MS is going to show high-end PC footage then hope no one notices the downgrade at launch" thing is being floated again?
A lot of the stuff written is starting to reek. Why the desperation for MS to fail? It's embarrassing, this desperation to believe any so-called insider as long as the news is negative.
None of this makes a difference if MS has properly communicated what to expect from the final system.
The hardware is there for the devs to have something work on and become familiar with the tools. The final config of the system shouldn't be a surprise to any dev that's been receiving dev kits this whole time.
Lol, talk about moving goal posts. You don't have that info.
They'll have a general idea, but they still don't know the full constraints until they use it. What's hard to get? I'm not saying it'll go from PC super duper settings to shit, I'm saying it won't look the same.
Basic Hardware Specifications
Xenon is powered by a 3.5+ GHz IBM PowerPC processor and a 500+ MHz ATI graphics processor. Xenon has 256+ MB of unified memory. Xenon runs a custom operating system based on Microsoft® Windows NT®, similar to the Xbox operating system. The graphics interface is a superset of Microsoft® Direct3D® version 9.0.
CPU
The Xenon CPU is a custom processor based on PowerPC technology. The CPU includes three independent processors (cores) on a single die. Each core runs at 3.5+ GHz. The Xenon CPU can issue two instructions per clock cycle per core. At peak performance, Xenon can issue 21 billion instructions per second.
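The 21-billion figure above is straight multiplication of the numbers in that paragraph; a trivial sketch of the arithmetic:

```python
# Peak instruction issue rate from the spec above:
# 3 cores x 3.5 GHz x 2 instructions per clock per core.
cores = 3
clock_ghz = 3.5
issue_per_clock_per_core = 2
peak_giga_instructions = cores * clock_ghz * issue_per_clock_per_core
print(peak_giga_instructions)  # 21.0 (billion instructions per second)
```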
The Xenon CPU was designed by IBM in close consultation with the Xbox team, leading to a number of revolutionary additions, including a dot product instruction for extremely fast vector math and custom security features built directly into the silicon to prevent piracy and hacking.
Each core has two symmetric hardware threads (SMT), for a total of six hardware threads available to games. Not only does the Xenon CPU include the standard set of PowerPC integer and floating-point registers (one set per hardware thread), the Xenon CPU also includes 128 vector (VMX) registers per hardware thread. This astounding number of registers can drastically improve the speed of common mathematical operations.
Each of the three cores includes a 32-KB L1 instruction cache and a 32-KB L1 data cache. The three cores share a 1-MB L2 cache. The L2 cache can be locked down in segments to improve performance. The L2 cache also has the very unusual feature of being directly readable from the GPU, which allows the GPU to consume geometry and texture data from L2 and main memory simultaneously.
Xenon CPU instructions are exposed to games through compiler intrinsics, allowing developers to access the power of the chip using C language notation.
GPU
The Xenon GPU is a custom 500+ MHz graphics processor from ATI. The shader core has 48 Arithmetic Logic Units (ALUs) that can execute 64 simultaneous threads on groups of 64 vertices or pixels. ALUs are automatically and dynamically assigned to either pixel or vertex processing depending on load. The ALUs can each perform one vector and one scalar operation per clock cycle, for a total of 96 shader operations per clock cycle. Texture loads can be done in parallel to ALU operations. At peak performance, the GPU can issue 48 billion shader operations per second.
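The 48-billion shader-op figure likewise falls out of the layout described above (48 ALUs, one vector plus one scalar op each, 500 MHz clock):

```python
# Peak shader throughput from the spec above.
alus = 48
ops_per_alu_per_clock = 2                      # one vector + one scalar op
clock_hz = 500e6
ops_per_clock = alus * ops_per_alu_per_clock   # 96 shader ops per clock
peak_ops_per_sec = ops_per_clock * clock_hz    # 48 billion ops per second
print(ops_per_clock, peak_ops_per_sec / 1e9)   # 96 48.0
```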
The GPU has a peak pixel fill rate of 4+ gigapixels/sec (16 gigasamples/sec with 4× antialiasing). The peak vertex rate is 500+ million vertices/sec. The peak triangle rate is 500+ million triangles/sec. The interesting point about all of these values is that they're not just theoretical: they are attainable with nontrivial shaders.
Xenon is designed for high-definition output. Included directly on the GPU die is 10+ MB of fast embedded dynamic RAM (EDRAM). A 720p frame buffer fits very nicely here. Larger frame buffers are also possible because of hardware-accelerated partitioning and predicated rendering that has little cost other than additional vertex processing. Along with the extremely fast EDRAM, the GPU also includes hardware instructions for alpha blending, z-test, and antialiasing.
The Xenon graphics architecture is a unique design that implements a superset of Direct3D version 9.0. It includes a number of important extensions, including additional compressed texture formats and a flexible tessellation engine. Xenon not only supports high-level shading language (HLSL) model 3.0 for vertex and pixel shaders but also includes advanced shader features well beyond model 3.0. For instance, shaders use 32-bit IEEE floating-point math throughout. Vertex shaders can fetch from textures, and pixel shaders can fetch from vertex streams. Xenon shaders also have the unique ability to directly access main memory, allowing techniques that have never before been possible.
As with Xbox, Xenon will support precompiled push buffers (command buffers in Xenon terminology), but to a much greater extent than the Xbox console does. The Xbox team is exposing and documenting the command buffer format so that games are able to harness the GPU much more effectively.
In addition to an extremely powerful GPU, Xenon also includes a very high-quality resize filter. This filter allows consumers to choose whatever output mode they desire. Xenon automatically scales the game's output buffer to the consumer-chosen resolution.
Memory and Bandwidth
Xenon has 256+ MB of unified memory, equally accessible to both the GPU and CPU. The main memory controller resides on the GPU (the same as in the Xbox architecture). It has 22.4+ GB/sec aggregate bandwidth to RAM, distributed between reads and writes. Aggregate means that the bandwidth may be used for all reading or all writing or any combination of the two. Translated into game performance, the GPU can consume a 512×512×32-bpp texture in only 47 microseconds.
The front side bus (FSB) bandwidth peak is 10.8 GB/sec for reads and 10.8 GB/sec for writes, over 20 times faster than for Xbox. Note that the 22.4+ GB/sec main memory bandwidth is shared between the CPU and GPU. If, for example, the CPU is using 2 GB/sec for reading and 1 GB/sec for writing on the FSB, the GPU has 19.4+ GB/sec available for accessing RAM.
Eight pixels (where each pixel is color plus z = 8 bytes) can be sent to the EDRAM every GPU clock cycle, for an EDRAM write bandwidth of 32 GB/sec. Each of these pixels can be expanded through multisampling to 4 samples, for up to 32 multisampled pixel samples per clock cycle. With alpha blending, z-test, and z-write enabled, this is equivalent to having 256 GB/sec of effective bandwidth! The important thing is that frame buffer bandwidth will never slow down the Xenon GPU.
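The bandwidth claims in this section hang together arithmetically; here is a quick check of the texture-fetch time, the leftover GPU bandwidth, and the EDRAM write rate, using only the figures quoted above:

```python
# Bandwidth arithmetic from the spec paragraphs above.
main_bw_bytes = 22.4e9            # unified memory bandwidth, bytes/sec

# A 512x512 texture at 32 bpp (4 bytes/pixel) is 1 MiB:
texture_bytes = 512 * 512 * 4
fetch_us = texture_bytes / main_bw_bytes * 1e6   # ~46.8 microseconds

# CPU using 2 GB/s for reads + 1 GB/s for writes leaves the GPU:
gpu_bw = 22.4 - 2 - 1                            # 19.4 GB/s

# EDRAM: 8 pixels x 8 bytes (color + z) per 500 MHz GPU clock:
edram_write_gbs = 8 * 8 * 500e6 / 1e9            # 32.0 GB/s

print(round(fetch_us, 1), round(gpu_bw, 1), edram_write_gbs)
```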
Audio
The Xenon CPU is a superb processor for audio, particularly with its massive mathematical horsepower and vector register set. The Xenon CPU can process and encode hundreds of audio channels with sophisticated per-voice and global effects, all while using a fraction of the power of a single CPU core.
The Xenon system south bridge also contains a key hardware component for audio: XMA decompression. XMA is the native Xenon compressed audio format, based on the WMA Pro architecture. XMA provides sound quality higher than ADPCM at even better compression ratios, typically 6:1 to 12:1. The south bridge contains a full silicon implementation of the XMA decompression algorithm, including support for multichannel XMA sources. XMA is processed by the south bridge into standard PCM format in RAM. All other sound processing (sample rate conversion, filtering, effects, mixing, and multispeaker encoding) happens on the Xenon CPU.
The lowest-level Xenon audio software layer is XAudio, a new API designed for optimal digital signal processing. The Xbox Audio Creation Tool (XACT) API from Xbox is also supported, along with new features such as conditional events, improved parameter control, and a more flexible 3D audio model.
Input/Output
As with Xbox, Xenon is designed to be a multiplayer console. It has built-in networking support including an Ethernet 10/100-BaseT port. It supports up to four controllers. From an audio/video standpoint, Xenon will support all the same formats as Xbox, including multiple high-definition formats up through 1080i, plus VGA output.
In order to provide greater flexibility and support a wider variety of attached devices, the Xenon console includes standard USB 2.0 ports. This feature allows the console to potentially host storage devices, cameras, microphones, and other devices.
Storage
The Xenon console is designed around a larger world view of storage than Xbox was. Games will have access to a variety of storage devices, including connected devices (memory units, USB storage) and remote devices (networked PCs, Xbox Live). At the time of this writing, the decision to include a built-in hard disk in every Xenon console has not been made. If a hard disk is not included in every console, it will certainly be available as an integrated add-on component.
Xenon supports up to two attached memory units (MUs). MUs are connected directly to the console, not to controllers as on Xbox. The initial size of the MUs is 64 MB, although larger MUs may be available in the future. MU throughput is expected to be around 8 MB/sec for reads and 1 MB/sec for writes.
The Xenon game disc drive is a 12× DVD, with an expected outer edge throughput of 16+ MB/sec. Latency is expected to be in the neighborhood of 100 ms. The media format will be similar to Xbox, with approximately 6 GB of usable space on the disk. As on Xbox, media will be stored on a single side in two 3 GB layers.
Industrial Design
The Xenon industrial design process is well under way, but the final look of the box has not been determined. The Xenon console will be smaller than the Xbox console.
The standard Xenon controller will have a look and feel similar to the Xbox controller. The primary changes are the removal of the Black and White buttons and the addition of shoulder buttons. The triggers, thumbsticks, D-pad, and primary buttons are essentially unchanged. The controller will support vibration.
If you ask me, either the 7970 rumor is bullshit or the 1.2 TFLOPS beta GPU is bullshit.
That's a 2.6 TFLOPS difference; if you ask me, that is fucking overkill.
That would mean the beta GPU must run at 100% efficiency and the alpha must operate at 32% efficiency. Which is bullshit. You could probably simulate everything you needed with a 7870.
Something doesn't add up.
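The efficiency figure above is just the ratio of the rumored spec numbers; a quick check (the 7870's ALU count and clock are AMD's desktop figures, my assumption):

```python
# Ratio of the rumored final GPU to the rumored alpha-kit GPU:
beta_tflops = 1.2
alpha_tflops = 3.8
ratio_pct = beta_tflops / alpha_tflops * 100
print(round(ratio_pct))   # ~32 percent

# A desktop 7870 (1280 ALUs x 1.0 GHz x 2 FLOPs/clock, my numbers) is
# ~2.56 TFLOPS, still more than 2x the rumored final GPU:
hd7870_tflops = 1280 * 1.0 * 2 / 1000
print(round(hd7870_tflops / beta_tflops, 2))
```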
Or they changed direction and vision when they went from alpha to beta design.
Alpha design: the Xbox division was still run with gamers in mind, maybe pre-Kinect craze?
And when the beta was designed, post-Kinect craze, leadership was flushed and the MBAs took over.
Wanted more of a service box instead of a gamer box.
Or the documentation is missing information Microsoft only disclosed in person to the tech leads, as a way to combat leaks from documents.
Or maybe I just want a stronger box than the rumors are pointing at.
/Tin foil hat ramblings of a disappointed gamer.