
Next Xbox is ‘More Advanced’ Than the PS5 according to Insiders.

I’m 100% certain of the following:

- PS3 was originally intended to have multiple Cell chips. ALL the early Cell info focused on how multiple Cells worked together “like cells in the body”. It was only much later when they retroactively changed this to mean “software cells” aka the jobs that the PPE dispatches to the SPEs

- PS3 was late. Sony insisted it was releasing “early 2006” all the way until Feb 2006, when they changed it to November.

- RSX was the plan B after the multi-Cell thing didn’t work out. Everybody was surprised by the rumors that Sony had contracted Nvidia for something. Sony’s president was asked if it was for PS3 and he said something like “that’s ridiculous, we don’t need Nvidia’s help”

This is in line with how I recall the PS3's development. The GPU was never part of the original plan and was added very late in development. I think this, in part, contributed to the very high price point. It's unfortunate the Cell processor technology didn't really work out as intended. It looked promising at the time. Is this technology (the Cell processor) even used anymore? I know it was put into some televisions at one point.
 

SaucyJack

Member
Are you saying you need to be within 3ft to see the difference between 1080p and 4K? Because that's nuts. There's a massive jump in image quality as long as the source is true 4K and it's a good panel.

Having said that, I think the 4K -> 8K jump will not be as great. Native 4K content is already as good as I would ever need. If they start introducing 100" panels then maybe, but until then, no.

No, I'm saying you need to be within 3ft to see the difference between 4K and 8K.
 
I sincerely think you and I have been watching two different conferences the past few years. The Xbox conference last year was almost perfection; in comparison, the PlayStation conference was an absolute disaster.

Sony's E3 2018 was the worst ever. Trailers of games we already knew about, the kind of games that have been PS4's thing since its existence. E3 2019 would have been all about third-person cinematic games again, so they thought it was better not to attend at all.
 

Panajev2001a

GAF's Pleasant Genius
I’m 100% certain of the following:

- PS3 was originally intended to have multiple Cell chips. ALL the early Cell info focused on how multiple Cells worked together “like cells in the body”. It was only much later when they retroactively changed this to mean “software cells” aka the jobs that the PPE dispatches to the SPEs

There was a GPU supplier before nVIDIA, Toshiba to be accurate (no vertex shaders / geometry processing on the card, just Triangle Setup, Pixel Shaders, texture units, and ROP’s IIRC). CELL as GPU did not make it out of patents / theory land. You are accurate that RSX was a late plan B thingy.
 

Dontero

Banned
There was a GPU supplier before nVIDIA, Toshiba to be accurate (no vertex shaders / geometry processing on the card, just Triangle Setup, Pixel Shaders, texture units, and ROP’s IIRC). CELL as GPU did not make it out of patents / theory land. You are accurate that RSX was a late plan B thingy.

CELL as GPU did make it out of theory land, as the SPUs on Cell were constantly used for graphical tasks. In fact you can fire up YouTube and look for raytracing demos that were done entirely on the CELL SPEs.
The stuff you are talking about, like ROPs, is back-end hardware that you could just attach to the silicon, or get as separate silicon if needed, and not that important.
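To make "graphical tasks" concrete: the kernels in those SPE raytracing demos boil down to tight vector math run over huge batches of rays. A purely illustrative sketch in plain C, not actual PS3 code (real SPE kernels used 128-bit SIMD intrinsics and DMA'd ray/scene data into each SPE's 256 KB local store):

```c
/* Illustrative only: a scalar ray-sphere intersection test, the kind of
 * small, math-heavy kernel that SPE raytracing demos ran over millions
 * of rays per frame. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns the distance along the ray to the nearest hit, or -1.0f on a miss. */
float ray_sphere(vec3 origin, vec3 dir, vec3 center, float radius)
{
    vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float b = dot3(oc, dir);               /* dir assumed normalized */
    float c = dot3(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f)
        return -1.0f;                      /* ray misses the sphere */
    float t = -b - sqrtf(disc);            /* nearest intersection */
    return (t > 0.0f) ? t : -1.0f;
}
```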

There were two reasons why the double CELL, which was the initial design, didn't work out.

1. Cost. CELL turned out to be way more expensive than they predicted, and late on top of that. One Cell in the PS3 already placed them in the red, so two of them would quickly sink Sony.

2. Paradigm shifts in how graphics were made.
When CELL was designed, most of graphics creation was still not standardized. Back in PS2 days you had 2 VPUs, which were precursors to the SPEs in CELL, and they were the ones doing the heavy lifting when it came to graphics.
So CELL was a natural progression on that front. Instead of 2 VPUs you would get 8 ultra-fast SPEs per CELL.
It was perfectly reasonable for Ken Kutaragi to assume the now-legendary 120FPS for every PS3 game if they based their prediction on what you could do with PS2-level graphics technology on PS3 hardware.

The paradigm shift came around 2002-2004. Shaders became standard in games, with Doom 3 being a harbinger of things to come. Soon raw texture and vertex throughput became less important and shading power became the main resource hog.
And this is something CELL couldn't do well, because it was designed with the old paradigm in mind.

So they had to throw out one CELL and get a GPU that could do shading well. The Nvidia chip itself was also behind the paradigm, because ATI had already unified vertex and pixel shading while the Nvidia GPU for the PS3 still operated with separate vertex and pixel shader units, so from the get-go it was worse than the Xbox 360 GPU.

And that is how the PS3 was made. If we still operated in a PS2-style graphics technology world, the PS3 would absolutely murder the competition graphically.
 

Ar¢tos

Member
I remember reading about the double CELL setup. It was the main CELL plus a second, modified one without the PPE and with some other customizations. What I read is that they gave up on the second CELL because developers (1st party) complained that they wanted a conventional GPU, since just learning how to work the CPU would be hard enough.
 

shark sandwich

tenuously links anime, pedophile and incels
There was a GPU supplier before nVIDIA, Toshiba to be accurate (no vertex shaders / geometry processing on the card, just Triangle Setup, Pixel Shaders, texture units, and ROP’s IIRC). CELL as GPU did not make it out of patents / theory land. You are accurate that RSX was a late plan B thingy.
This is the first I’ve ever heard about Toshiba working on a GPU for PS3. Perhaps you’re mistaking it for this:

IIRC Toshiba used that in some laptops for HD video decoding, but that's about it.
 

Panajev2001a

GAF's Pleasant Genius
CELL as GPU did make it out of theory land, as the SPUs on Cell were constantly used for graphical tasks. In fact you can fire up YouTube and look for raytracing demos that were done entirely on the CELL SPEs.
The stuff you are talking about, like ROPs, is back-end hardware that you could just attach to the silicon, or get as separate silicon if needed, and not that important.

The fact that CELL could be used for GPU-like tasks and software rendering was not news; I am aware it was used for these and many more tasks (DICE had a clever way to use the SPEs in Frostbite to help with geometry culling too)... any CPU could, and programmable, self-feeding wide vector processors with their own local storage and a huge register file are of course great candidates for such work... they sound awfully close to a modern shading core ;).

What they lacked, and what the patent detailed (and what LRB added on top of its vector units too), was fixed-function hardware to fetch and filter textures, rasterize triangles, merge MSAA samples, blend render targets, etc...
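To make the fixed-function point concrete, here is a rough sketch (mine, not from any patent) of just one of those jobs, bilinear texture filtering, done in software. Dedicated texture units do this per sample essentially for free, while a CELL- or LRB-style software renderer has to spend vector ALU cycles and local-store bandwidth on it:

```c
/* A minimal sketch of bilinear texture filtering in software.
 * Assumptions for the example: RGBA8 texture, normalized UV coordinates,
 * clamp addressing. Purely illustrative. */
#include <stdint.h>
#include <math.h>

typedef struct { float r, g, b, a; } rgba;

static rgba texel(const uint8_t *tex, int w, int h, int x, int y)
{
    if (x < 0) x = 0; if (x >= w) x = w - 1;   /* clamp addressing */
    if (y < 0) y = 0; if (y >= h) y = h - 1;
    const uint8_t *p = tex + 4 * (y * w + x);  /* 4 bytes per RGBA8 texel */
    rgba c = { p[0] / 255.0f, p[1] / 255.0f, p[2] / 255.0f, p[3] / 255.0f };
    return c;
}

rgba sample_bilinear(const uint8_t *tex, int w, int h, float u, float v)
{
    float fx = u * w - 0.5f, fy = v * h - 0.5f;
    int x0 = (int)floorf(fx), y0 = (int)floorf(fy);
    float tx = fx - x0, ty = fy - y0;          /* fractional blend weights */

    rgba c00 = texel(tex, w, h, x0,     y0);
    rgba c10 = texel(tex, w, h, x0 + 1, y0);
    rgba c01 = texel(tex, w, h, x0,     y0 + 1);
    rgba c11 = texel(tex, w, h, x0 + 1, y0 + 1);

    rgba out;
    out.r = (c00.r * (1-tx) + c10.r * tx) * (1-ty) + (c01.r * (1-tx) + c11.r * tx) * ty;
    out.g = (c00.g * (1-tx) + c10.g * tx) * (1-ty) + (c01.g * (1-tx) + c11.g * tx) * ty;
    out.b = (c00.b * (1-tx) + c10.b * tx) * (1-ty) + (c01.b * (1-tx) + c11.b * tx) * ty;
    out.a = (c00.a * (1-tx) + c10.a * tx) * (1-ty) + (c01.a * (1-tx) + c11.a * tx) * ty;
    return out;
}
```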

There were two reasons why the double CELL, which was the initial design, didn't work out.

1. Cost. CELL turned out to be way more expensive than they predicted, and late on top of that. One Cell in the PS3 already placed them in the red, so two of them would quickly sink Sony.

2. Paradigm shifts in how graphics were made.
When CELL was designed, most of graphics creation was still not standardized. Back in PS2 days you had 2 VPUs, which were precursors to the SPEs in CELL, and they were the ones doing the heavy lifting when it came to graphics.

Funny thing is that the VUs in the PS2 were much more similar to the capabilities of modern shaders (much more flexible than the vertex shaders of that time)... but that is just an aside and, to be frank, not too dissimilar to the VLIW-based shader cores you can find in older Radeon GPUs (think pre-GCN).

So CELL was a natural progression on that front. Instead of 2 VPUs you would get 8 ultra-fast SPEs per CELL.
It was perfectly reasonable for Ken Kutaragi to assume the now-legendary 120FPS for every PS3 game if they based their prediction on what you could do with PS2-level graphics technology on PS3 hardware.

[...]

So they had to throw out one CELL and get a GPU that could do shading well. The Nvidia chip itself was also behind the paradigm, because ATI had already unified vertex and pixel shading while the Nvidia GPU for the PS3 still operated with separate vertex and pixel shader units, so from the get-go it was worse than the Xbox 360 GPU.

And that is how the PS3 was made. If we still operated in a PS2-style graphics technology world, the PS3 would absolutely murder the competition graphically.

I am aware of the many reasons why CELL-based GPUs did not fly, despite there being a lot of interest in the field well after CELL launched; it ended up not flying as high or as wide as expected. I am also aware that the flexibility of customised universal shader units maybe provides less peak performance in some tasks, but the flexibility it provides to developers overall (being able to dedicate all units to vertex shading or pixel shading depending on the workload) is a massive boost... think z pre-pass.

The LRB project had the same basic philosophy: since GPUs are becoming more and more programmable and that is where they believed the next leap was, find the minimum possible fixed-function HW that can take a sea of optimised x86/general-purpose cores and make them efficient at graphics tasks against modern GPUs, while letting developers explore the flexibility that software rendering allows.
There is a famous old Sweeney interview where he was also highly anticipating CPUs becoming fast enough at such massively parallel tasks to allow rendering to move back, or mostly back, to software.


The paradigm shift came around 2002-2004. Shaders became standard in games, with Doom 3 being a harbinger of things to come. Soon raw texture and vertex throughput became less important and shading power became the main resource hog.
And this is something CELL couldn't do well, because it was designed with the old paradigm in mind.

If anything, SPEs would do better at purely shading tasks than they would in a "texture-based world", as they lack dedicated HW to process textures. The reason they lost, and LRB lost too, is that the advantage of GPUs in performance, developer tooling / programming model, and the power consumption and silicon cost of the dedicated HW surrounding and feeding the shader cores, was too great, and is still too great, to replace them with general-purpose CPUs even with augmentations.
 

SaucyJack

Member
I think you are wrong.

Actually, no I am not.

As per my post that preceded the one you quoted I was referring to MY EXPERIENCE of 8K. You can have a different experience and/or opinion but I am the sole authority on my own experience.

Please feel free to talk about your own experience of seeing 8K and 4K displays side by side though.

My experience, looking at side-by-side 65” 4K and 8K displays showing native content, was that once you were beyond about 3ft (a distance at which you can easily see the difference between 1080p and 4K) the difference was not at all obvious.

My opinion, as per my earlier post, is that 8K is not going to be very relevant for next gen gaming. The sets aren’t going to be affordable anytime soon and you’re going to need a big ass panel to be able to benefit from it.
 

Dontero

Banned
If anything, SPEs would do better at purely shading tasks than they would in a "texture-based world", as they lack dedicated HW to process textures. The reason they lost, and LRB lost too, is that the advantage of GPUs in performance, developer tooling / programming model, and the power consumption and silicon cost of the dedicated HW surrounding and feeding the shader cores, was too great, and is still too great, to replace them with general-purpose CPUs even with augmentations.

I was referring here to the paradigm rather than specifically to CELL itself. My point about a "texture-based world" is that previously you didn't have shaders; you just used one type of texture and that was it, no diffuse, normal, etc. While it is true that CELL would do better at pure shading math, my point was broader and points out why CELL didn't do well with the games that were modern at the time: it wasn't just shading math but the whole package around it.

Also I disagree with the idea that fixed function is going away like the dodo.
The reason why we have more and more general-purpose hardware in GPUs is that GPU makers aim those GPUs more and more at tasks not related to gaming.

The best example of that is the recent tensor cores on Nvidia hardware. There is no point to them in gaming, and the silicon spent on them could be used to boost game-related stuff. But using gaming as an extra option to fund hardware, or as a fallback, is the default way of doing things in the GPU space.

Hopefully there will be someone new on the horizon who focuses only on gaming instead.
 
So I was wondering: if these consoles are going to use SSDs, does that mean regular hard drives won't work on them? If I want to get an external HDD for my PS5 or Xbox, will it need to be an SSD?
 
Lots of "A" names in all these rumors/leaks :messenger_fearful:

Advanced
Arden
Arcturus
Argalus
Anaconda
Where are some of these names from?

EDIT: I think I found where some of this information is from:

They are doing a better job trying to decipher the hardware specs than NeoGAF is. We in here tend to get sidetracked pretty hardcore and turn things into a console-war scenario in every thread.
 

Panajev2001a

GAF's Pleasant Genius
Also I disagree with the idea that fixed function is going away like the dodo.

Not saying that they are right... just that it was a much, much wider bet than simply Sony's. As GPUs become more and more programmable, once programmable blending becomes ubiquitous, when custom operations become "simple" new instructions, and as CPUs hit a single-threaded throughput wall (in terms of being able to improve overall performance beyond going massively multi-core)... well, we will see; the two will look very, very close :).

My point about a "texture-based world" is that previously you didn't have shaders; you just used one type of texture and that was it, no diffuse, normal, etc.
No normal, diffuse, gloss maps? Before Physically Based Shading became a thing, texture maps were how you would encode all that info. You could actually do some nice math playing with the texture matrix and using black-and-white textures to do per-pixel lighting / projectors (set up a projection matrix for the light as the texture matrix, use vertices as texture coordinates, and use the resulting data to sample a black-and-white texture to calculate whether the vertex should be lit, or whether it should be discarded because it is backward facing or behind the projector...).
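For anyone who never saw that trick, here is a rough sketch of the idea in plain C (my illustration of the math; the PS2 version did the transform and lookup through the fixed-function texture matrix rather than explicit code):

```c
/* A rough sketch of the projector trick described above: run the vertex
 * position through the light's projection matrix (what the texture matrix
 * did), treat the result as texture coordinates, and use a black-and-white
 * mask to decide whether the vertex is lit. Conventions (row-major matrix,
 * w <= 0 meaning "behind the projector") are assumptions for the example. */
typedef struct { float m[4][4]; } mat4;
typedef struct { float x, y, z, w; } vec4;

static vec4 mul_mat4(const mat4 *m, vec4 v)
{
    vec4 r;
    r.x = m->m[0][0]*v.x + m->m[0][1]*v.y + m->m[0][2]*v.z + m->m[0][3]*v.w;
    r.y = m->m[1][0]*v.x + m->m[1][1]*v.y + m->m[1][2]*v.z + m->m[1][3]*v.w;
    r.z = m->m[2][0]*v.x + m->m[2][1]*v.y + m->m[2][2]*v.z + m->m[2][3]*v.w;
    r.w = m->m[3][0]*v.x + m->m[3][1]*v.y + m->m[3][2]*v.z + m->m[3][3]*v.w;
    return r;
}

/* Stand-in for the black/white mask texture: white inside a circle
 * (a round spotlight "cookie"), black outside. */
static float sample_mask(float u, float v)
{
    float du = u - 0.5f, dv = v - 0.5f;
    return (du*du + dv*dv <= 0.25f) ? 1.0f : 0.0f;
}

/* Returns 1.0 if the vertex is lit by the projector, 0.0 otherwise. */
float projector_light(const mat4 *light_proj, vec4 position,
                      float n_dot_l /* vertex normal . light direction */)
{
    vec4 t = mul_mat4(light_proj, position);   /* the "texture matrix" transform */
    if (t.w <= 0.0f || n_dot_l <= 0.0f)
        return 0.0f;                           /* behind the projector or back-facing */
    float u = 0.5f + 0.5f * t.x / t.w;         /* perspective divide into [0,1] */
    float v = 0.5f + 0.5f * t.y / t.w;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f)
        return 0.0f;                           /* outside the projector's frustum */
    return sample_mask(u, v);
}
```

The same projected-texture idea is basically what shadow maps and light cookies still do today, just with the lookup done in a pixel shader.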

Software only --> Software + fixed function HW --> Software only seems like a dance that keeps repeating as a new domain is found and explored (your Tensor Cores example is going outside of graphics and into machine learning / AI). I would not be surprised to see new Unified Shaders incorporating some extra HW for raytracing computation acceleration and the demise of specialised RTX cores for example...
 

PaNaMa

Banned
Microsoft needs to deliver a console that's 2X as powerful as the Xbox One X. I expect ~11-12 TFLOPs. I just hope developers get into the habit of letting gamers choose between a high-framerate mode and higher fidelity in more titles next gen.
 

Dontero

Banned
No normal, diffuse, gloss maps? Before Physically Based Shading became a thing, texture maps were how you would encode all that info. You could actually do some nice math playing with the texture matrix and using black-and-white textures to do per-pixel lighting / projectors (set up a projection matrix for the light as the texture matrix, use vertices as texture coordinates, and use the resulting data to sample a black-and-white texture to calculate whether the vertex should be lit, or whether it should be discarded because it is backward facing or behind the projector...).

You are right, but those were not really that expensive compared to the proper shading that arrived in 2002-2004. Or should I say that the brunt of the work at the time was not on shaders. That change required a fundamental shift in how GPUs are designed. Before, how many polygons a GPU could push mattered, but now nobody says anything about that number because the shading part is de facto the most important.

This is pretty similar to how ray tracing existed in the '90s but the hardware shift presented by Nvidia arrived only recently.
 

HeisenbergFX4

Gold Member
I think I mentioned it before, maybe even in this thread, but when I looked at a 75" 8K sitting next to a 77" 4K, once we backed up to 3-4 feet I honestly couldn't see a difference.

Up super close, yeah, there was one, but certainly not at normal viewing distance for a display that size.

What did stand out on the 8k was the 2300 nits though.
 

ethomaz

Banned
I think I mentioned it before, maybe even in this thread, but when I looked at a 75" 8K sitting next to a 77" 4K, once we backed up to 3-4 feet I honestly couldn't see a difference.

Up super close, yeah, there was one, but certainly not at normal viewing distance for a display that size.

What did stand out on the 8k was the 2300 nits though.
That is no surprise at all, because each resolution × TV size combination has a recommended viewing distance at which you can see all the detail the extra pixels offer.

That said, 8K is about the HDMI 2.1 max output resolution... games won't run next gen at native 8K with a 12-14TF GPU... of course, some indie exceptions may exist, but even that is hard to believe.
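For what it's worth, the usual rule of thumb behind those viewing-distance charts is that 20/20 vision resolves roughly one arcminute per pixel. A quick back-of-the-envelope sketch (my own numbers, assuming a 65" 16:9 panel, nothing official):

```c
/* Rough estimate of the farthest distance at which individual pixels are
 * still resolvable, assuming the eye resolves about one arcminute
 * (a common rule of thumb, not an official standard). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    const double diag_in = 65.0;                         /* panel diagonal, inches */
    const double widths[] = { 1920.0, 3840.0, 7680.0 };  /* horizontal pixels */
    const char  *names[]  = { "1080p", "4K", "8K" };
    const double arcmin   = (1.0 / 60.0) * PI / 180.0;   /* 1 arcminute in radians */

    /* 16:9 panel: width = diagonal * 16 / sqrt(16^2 + 9^2) */
    const double width_in = diag_in * 16.0 / sqrt(16.0 * 16.0 + 9.0 * 9.0);

    for (int i = 0; i < 3; i++) {
        double pitch_in    = width_in / widths[i];       /* pixel pitch, inches */
        double max_dist_in = pitch_in / tan(arcmin);     /* where 1 pixel spans 1 arcminute */
        printf("%-5s: pixels stop being resolvable beyond ~%.1f ft\n",
               names[i], max_dist_in / 12.0);
    }
    return 0;
}
```

Run as-is it lands around 8 ft for 1080p, 4 ft for 4K and 2 ft for 8K at 65", which is in the same ballpark as the 3-4 ft observations above.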
 

shark sandwich

tenuously links anime, pedophile and incels
I think I mentioned it before, maybe even in this thread, but when I looked at a 75" 8K sitting next to a 77" 4K, once we backed up to 3-4 feet I honestly couldn't see a difference.

Up super close, yeah, there was one, but certainly not at normal viewing distance for a display that size.

What did stand out on the 8k was the 2300 nits though.
Haven’t seen them myself, but all the impressions I’ve read on the Samsung 8K TV said that it’s pretty hard to tell the difference even when viewing the big-ass model from up close.

8K is the worst possible use of GPU resources. It’ll provide very little noticeable improvement even for the few people who actually have an 8K tv.
 

ethomaz

Banned
Haven’t seen them myself, but all the impressions I’ve read on the Samsung 8K TV said that it’s pretty hard to tell the difference even when viewing the big-ass model from up close.

8K is the worst possible use of GPU resources. It’ll provide very little noticeable improvement even for the few people who actually have an 8K tv.
It is not that useless... 8K needs less anti-aliasing to get edges looking right compared to 4K.

But before entering the "it is a waste of GPU resources" territory, it is better to make clear that no next-gen GPU will do proper native 8K rendering... even the Titan RTX can't do that today.
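The raw pixel counts make the point well enough (a quick sketch; per-frame cost doesn't scale perfectly linearly with pixel count, but it is a reasonable first-order guide):

```c
/* Quick pixel-count arithmetic behind the "native 8K is out of reach"
 * argument: every resolution step quadruples the number of pixels that
 * have to be shaded each frame. */
#include <stdio.h>

int main(void)
{
    const long p1080 = 1920L * 1080L;   /*  2,073,600 pixels */
    const long p4k   = 3840L * 2160L;   /*  8,294,400 pixels (4x 1080p) */
    const long p8k   = 7680L * 4320L;   /* 33,177,600 pixels (16x 1080p) */

    printf("4K is %.0fx the pixels of 1080p\n", (double)p4k / p1080);
    printf("8K is %.0fx the pixels of 4K and %.0fx the pixels of 1080p\n",
           (double)p8k / p4k, (double)p8k / p1080);
    return 0;
}
```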
 
I sincerely think you and I have been watching two different conferences the past few years. The Xbox conference last year was almost perfection; in comparison, the PlayStation conference was an absolute disaster.

Can I make an embarrassingly embarrassing statement? I missed E3 last year and I've only just realised.

If MS have turned it around, then good, but they still have Kinect, Tv Tv Tv, the unveiling of a car, and mattricknoddingwhileraisinghands.gif

I know Sony has $599 and giant enemy crabs. This isn't a direct comparison though. (edit: as in, this isn't MS vs Sony)

Maybe I just miss the OMGZZZ moments from the early 360 years. MS knocked it out of the park.
 

The Alien

Banned
What a strange post in response to saying "good video", are you ok?
It's good because he goes on to say he has seen documents or something like that, therefore backing up the rumors that they are aiming higher/stronger, whatever word floats your boat.
Given the fact that he had an Xbox member on his podcast the other day, if he lied about such a thing it would come back really fast to haunt him.

Take a deep breath.

I heard the podcast. He had Ybarra on there, who's literally like 2nd in command behind Phil at Xbox. He's a pretty big deal at Xbox.

He also had Brad Sams on there too. Peeps can say what they want about his YT vids, but he gets some pretty reliable info.

This thread is interesting. Very spicy and still a year+ away.
 
I heard the podcast. He had Ybarra on there, who's literally like 2nd in command behind Phil at Xbox. He's a pretty big deal at Xbox.

He also had Brad Sams on there too. Peeps can say what they want about his YT vids, but he gets some pretty reliable info.

This thread is interesting. Very spicy and still a year+ away.
Yeah in my past experience Brad has been pretty spot on with his reporting.
 

Shin

Banned
I heard the podcast. He had Ybarra on there, who's literally like 2nd in command behind Phil at Xbox. He's a pretty big deal at Xbox.

He also had Brad Sams on there too. Peeps can say what they want about his YT vids, but he gets some pretty reliable info.

This thread is interesting. Very spicy and still a year+ away.
At least someone does a bit of homework before going batshit crazy; we have a lot of those :)
Following the "Dante" XDK image: https://imgur.com/a/1A6aoQ2
leads to: https://twitter.com/blueisviolet/status/1119804993580568576
leads to: http://pci-ids.ucw.cz/read/PC/1022/162b (Brad Sams said Arden is Anaconda and as you can see from the ID it supports the Tweet above)
leads to: https://i.imgur.com/WeU3mdq.png (I trust HMQGG, IIRC he was reliable on GAF and seems to be the case at QQera, Klondike IDK)
leads to: http://tiny.cc/mbgm5y (don't know who the hell that is but Russians love to code/hack so I'll take it he might understand all the above)

Then you have all the YouTube/Twitter people, and some on QQera, who say they have seen or know what Microsoft is doing, so it leads to questions.
It's a dead end though, except for what Argarus might mean (also part of a constellation?); I can't find any info and am running around in circles ATM.
That's all I got. Someone who's actually into technology can try to make sense of it (whether it turns out to be true or false matters not - discuss the tech).
 

Ar¢tos

Member
It won't matter much, at least in the first years (regarding sales just because it is more powerful).
3rd party devs won't make games just for the new consoles because the install bases are small, so games will be crossgen and limited by the old consoles. It will be up to 1st party devs to show the potential of the new consoles and Sony has the advantage there with ND and SSM with the ICE team backing them.
It will be a while before MS can really show that they have the most powerful machine (if it is true). MS should focus on strategies for market penetration in EU and Asia (although Asia is turning into a lost cause for all home consoles).
 

SonGoku

Member
Why do you assume 4K will be standard when 1080p was not? The reason you got 4K support at all is because there were additional upgraded consoles for the same games, meaning there was plenty of extra optional power.

With next-gen consoles there will be no optional extra power, and most of the power will have to go into the game itself instead of resolution. It would be insane for developers to waste 90% of the console's power just to hit some artificial resolution mark when they can improve a game's graphics in a much bigger way by using that power for something else, like extra shaders, more advanced lighting, etc.

I fully expect that next gen will be 1080p by default, with some games, mostly indies, going into 4K territory. Then after a while they will release a Pro version that gives the ability to go to 4K with all games.
Because next-gen consoles will target 4K, that much is certain; even if we get 13TF machines there will be some sub-4K games (1800p, 1400p, etc.) using CB.
You are living in la-la land if you think the target resolution will be 1080p.

I'm glad you understand how much of a resource hog 4K and its approximates are; that's why anything under 10TF is crazy talk. Mark my words, PS5 won't go below 11TF.
There was a GPU supplier before nVIDIA, Toshiba to be accurate (no vertex shaders / geometry processing on the card, just Triangle Setup, Pixel Shaders, texture units, and ROP’s IIRC). CELL as GPU did not make it out of patents / theory land. You are accurate that RSX was a late plan B thingy.
If that's really true, Sony was insane to think they could make a GPU from scratch; they should have gone with ATI (or even Nvidia) from the get-go.
CELL as GPU did make it out of theory land, as the SPUs on Cell were constantly used for graphical tasks. In fact you can fire up YouTube and look for raytracing demos that were done entirely on the CELL SPEs.
The stuff you are talking about, like ROPs, is back-end hardware that you could just attach to the silicon, or get as separate silicon if needed, and not that important.
Could a hypothetical double-CELL setup really have worked? As in, produce better visuals than the PS3 did?
I have my doubts, because there were things CELL was not so good at that RSX handled. It was a good setup when you optimized for it, because when specific resource-heavy graphical tasks were offloaded onto CELL's SPEs, it freed up a lot of RSX resources to push tasks that "traditional" GPUs are better at.
 

Panajev2001a

GAF's Pleasant Genius
If that's really true, Sony was insane to think they could make a GPU from scratch; they should have gone with ATI (or even Nvidia) from the get-go.

Had they gone to ATI or nVIDIA at the start, yeah, they would have gotten a much better deal, but I guess they saw potential in working with their close Japanese partners (see Nintendo themselves with ArtX and PICA for GCN and 3DS respectively, or the GPU of the DS), and they were essentially evolving the design of their previous GPUs to its logical next step (the same kind of choice MS made with the ESRAM, and why they were shocked people were having so many problems with it... it was what the Xbox 360 used, just with enhancements that fixed all the complaints people had with the eDRAM in the Xbox 360... small pool? Much bigger... high latency? Fixed... must write off-screen render targets to main RAM and read them back into local VRAM? Fixed...).

Look at the strengths and weaknesses of PS2's GS: a small amount of dedicated VRAM with very high bandwidth, not very flexible blending modes but very high fillrate, pushing people to implement deep multipass algorithms (thanks to the polygon-pushing power of the EE), etc...

eDRAM in this GPU was possibly a much, much bigger pool (32-64 MB? Dunno) at a very high bandwidth, with very high fillrate yet programmable pixel shaders allowing more single-pass rendering (geometry submitted only once to the GPU), and possibly a fast bidirectional bus between CPU and GPU...

Problem is that tooling for this might have been one of the things they underestimated (nVIDIA's tools are not exactly shabby), and not being able to get yields and power consumption under control may be what pushed Sony to switch (but that is speculation on my part as to the exact reasons why they switched at the last minute)... even if Toshiba had been able to deliver what they expected, I do not think it is what delayed the launch by a year... if it had... mmmh...
 

SonGoku

Member
Panajev2001a
The thing is, all those you listed worked with a company that had experience in prior GPU designs: Nintendo (ATI & DMP), MS (ATI).
I think it's insanity that they chose someone with no background in GPU design to make one for them.

Also, Nvidia started working with Sony in early 2003/late 2002; that gives them 3 full years of development for the initially intended release date. Nvidia never had the intention of sharing their Tesla arch with Sony it seems; they should have gone with ATI. I'm really curious actually, why didn't they go with ATI?

The year delay was due to BD lasers afaik.
 

SaucyJack

Member
What does this word mean?

Wiki's definition: Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participants.
 

CyberPanda

Banned
Wiki's definition: Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participants.
Ah, got it. Thanks.
 

Panajev2001a

GAF's Pleasant Genius
Panajev2001a
The thing is, all those you listed worked with a company that had experience in prior GPU designs: Nintendo (ATI & DMP), MS (ATI).

ArtX was, yes, made of ex-Silicon Graphics engineers and did become a key part of ATI, but they were not known as a big GPU maker at that time; their claim to fame was the iQue processor IIRC. Certainly nobody expected a Graphics Core Next kind of revolution out of them, or that ATI would select them as the company's cornerstone.

I think it's insanity that they chose someone with no background in GPU design to make one for them.
I think you are being a bit harsh on the people that designed the PS1 and especially the PS2 graphics cores here... the biggest problem for me, had they shipped the final product, would have been the tools and libraries to program that thing. It would possibly have been quite powerful and balanced the CELL CPU quite well, from what it was rumoured to be, but it would have required developers once again to adapt, instead of the HW adapting to them or at least following modern GPU trends on the PC and elsewhere.

Had PS3 shipped with that solution on time a year earlier, they could possibly have gotten away with it for a third time... maybe...

Also, Nvidia started working with Sony in early 2003/late 2002; that gives them 3 full years of development for the initially intended release date.

I do not really buy that "official" timeline, from what I heard at the time the "switch" was announced: given the buggy chip nVIDIA delivered and how far behind their own curve it was... it sounds like a "beggars can't be choosers" kind of deal they offered...

Nvidia never had the intention of sharing their Tesla arch with Sony it seems; they should have gone with ATI. I'm really curious actually, why didn't they go with ATI?

The year delay was due to BD lasers afaik.

That is correct, I was just thinking about the “possibility” of that having any influence, not stating it did, but it certainly had an opportunity cost as they did not partner up with nVIDIA sooner.
 

Panajev2001a

GAF's Pleasant Genius
Only the Xbox had 'better' hardware, right? The PS2 was and still is my favourite console of all time.

As expected given its release date, and how much an extra year of R&D meant in those days given the pace of R&D back then, yes, it was overall more powerful. But the PS2 had a lot of cool tricks up its sleeves, and some of the things it did well and with ease were a problem to do on other architectures, and kept being so for quite some time (the speed at which, and with how little impact, you could switch render state and flush all sorts of buffers almost on a per-triangle basis was quite incredible), so software designed to maximise it required some good investment to be redesigned around a different architecture.
 

SonGoku

Member
I think you are being a bit harsh on the people that designed the PS1 and especially the PS2 graphics cores here...
ArtX at least had the N64 GPU in their portfolio, and what they set out to do with the GC was much less complex than what the PS3 demanded.
Do you really think the primitive graphics processors found in the PS1/PS2 qualified them for making a much more complex GPU to meet next-gen PS360 standards? Honest question, I don't know enough to form an opinion.

Sony should have taken a hint by looking at the PS2 and GC design philosophies; ironically I think it was MS who was taking notes lol.

Who made the GS anyway? Wiki says Sony and Toshiba made the Emotion Engine, but what about the GS?
 

TLZ

Banned
What does this word mean?

So many here. Pick any. Although I don't imagine the first one is what people here mean :messenger_grinning_smiling:

Were you around in the pre-XBO era? Man, the amount of astroturfers during the disastrous launch of the XBO was insane. Pretty much every day someone was found astroturfing and got banned... lol
No I missed all that. I only discovered this place around 2015, when I was looking for Switch info.
 

Paulxo87

Member
The way Sony played their hand unveiling the PS5 specs is more of a power move than most people think. 8K? Check. SSD? Check. Navi and Zen 2? Check. What is MS going to announce at E3 that can theoretically top what Cerny shared with us? That they can do 16K? Sony will just readjust clock speeds and RAM totals based on what MS unveils at E3. The next few months are going to be intense.
 
The way Sony played their hand unveiling the PS5 specs is more of a power move than most people think. 8K? Check. SSD? Check. Navi and Zen 2? Check. What is MS going to announce at E3 that can theoretically top what Cerny shared with us? That they can do 16K? Sony will just readjust clock speeds and RAM totals based on what MS unveils at E3. The next few months are going to be intense.
If the Arcturus rumor is true and Microsoft's GPU is a generation ahead, then that will be a little difficult for Sony to top. But that's only if it's true, which I doubt. Either way, next generation is going to be a heavy-hitting generation and both companies will come out swinging, in my opinion.

I think Microsoft is going to try to get the most power, more exclusives, xCloud, more cross-play including xCloud, more backwards compatibility, bigger gamepass library for Xbox and PC, and so on. I don’t think Sony will slouch, I just think Microsoft will have a lot more to offer coming into next gen.
 

DeepEnigma

Gold Member
If the Arcturus rumor is true and Microsoft's GPU is a generation ahead, then that will be a little difficult for Sony to top. But that's only if it's true, which I doubt. Either way, next generation is going to be a heavy-hitting generation and both companies will come out swinging, in my opinion.

Arcturus has already been debunked as being a "next gen" GPU. They also would not have a console out in time for Fall 2020 using AMD's next-gen chip after Navi; production would barely be ready for the desktop realm.

 