
So with Nvidia buying ARM, could PS6 or XBox actually go Nvidia?

llien

Member
Because despite the reasonable-looking headline power consumption, the system-level power drain of the Intel mobile chips is too high for the sort of applications Atom was aimed at. This is, IMO, simply down to Intel not being committed enough to the market to put the money into moving their designs forward. OK, they got off on the wrong foot with the original series of in-order Atom chips - they ticked the "low power consumption" box, but the performance was grossly deficient. But the Moorefield Atoms (AKA Z-series) were a huge improvement and were highly competitive with the contemporary ARM designs in performance terms - although they lost out a bit in system-level power consumption due to the dual-channel memory setup, that same feature made them very fast in applications that demanded a lot of memory bandwidth.
If you check AnandTech's in-depth article I linked in my previous post, you'll see that Atom was pretty damn good even among ARM chips.

The claim that Intel couldn't push it further, given that they got sub-10W with subsequent full-blown mobile chips, is... unsubstantiated, I'd say?

What was missing in the whole picture is: WHY.
Why would anyone put an x86 CPU into a phone to begin with, given that most apps that care about CPU architecture (and there are a number of those) are ARM?
Why would Intel sell Atom at dirt-cheap prices?

The only answer I could come up with is: because of Windows Phone. But Microsoft was too late with that (never mind whether Microsoft's business model was even viable there), so there was simply no reasonable use case for that chip.
 

Trimesh

Banned

That's actually exactly my point - they did come up with a series of chips that were highly competitive with the contemporary ARM parts, but then just let them wither on the vine by (presumably) not wanting to invest the money to keep them at a point where they were a plausible alternative to ARM. It's also worth pointing out that at one point Intel had by far the best-performing ARM chips in the business, although that was largely by accident: DEC sued Intel over a bunch of its patents, and the suit was settled by Intel buying DEC's semiconductor operations. As part of that, they got the rights to the StrongARM CPUs - but they never did much with them, and by the time they sold them on (to Marvell, IIRC) the rest of the market had caught up.

I personally think this low-effort approach is also why they are now under such threat from AMD.
 
We will see what happens when desktop high-power Apple chips are released and when companies like Adobe release native versions of their apps. Right now the M1 hints that things are about to get ugly for x86.
Macs are a closed ecosystem and they are expensive; there's a premium Apple won't forgo, especially if sales are growing and customers are satisfied - they'll more readily drop features you expect and sell them back to you as an extra later than step up their game just because. Their goal is not being in every computer; they want people to pay extra for the privilege of having them.

Even if this thing were scalable through the roof and they could make a 64-core Mac Pro out of it, seeing as they don't need to, they won't (hence them making a half-size Mac Pro instead - just don't expect expandability on par with what a pro machine requires). If the x86 and ARM markets didn't react to the M1, you'd be stuck with M1-class performance for years, as the focus would be on making the SoC cheaper for them rather than increasing core count just because.


So it's not like you'll see a migration from anyone other than professionals (who don't mind the manufacturer deciding the supported lifetime, the feature limits and the operating system of their device) and enthusiasts who don't oppose walled gardens (mostly customers who are already part of that walled garden in some way, shape or form - for instance, they already own iPhones/iPads).

For the M1 to make things hard for x86 directly, it would have to be available to compete directly. It isn't, and it won't be.


So it'll drive interest in Windows on ARM and the like, and if powerful ARM competitors take a while to arrive, some devs might opt to use it to develop with powerful ARM CPUs in mind. But this is not optimal, as you don't even have native Linux (or Windows) install support on these Macs. And the RAM limit is ridiculous for some tasks - ridiculous even if they double it.

Regardless, big developers (Adobe, Microsoft, Google, Autodesk) will likely be more ready for when a competitive enough ARM chip comes along than they otherwise would be. And even if one doesn't come along, this is likely to help the entry-level market: as phone SoCs get more powerful, they'll surely eat away at entry segments like the netbook/Atom/Chromebook ones.


I have no doubt we'll see interesting things with this architecture, and some competition is really welcome and good for an otherwise stagnant market. But yeah - it is Apple, and their agenda, while usually copied by companies without a clear direction, is not about what's best for the market or about sharing. If process shrinks stop at 5nm for a few years, they'll hit a very clear engineering ceiling and, in a competitive market, will have to start doing the things Intel has been doing while stuck on 14nm for years.

I doubt Apple wants to go beyond 150mm² per (desktop) chip - 200mm² tops if they are cornered. There are a few creative ways out, and I have no doubt they'll somehow manage, but by managing I mean managing with regard to their own needs.
The problem with Atom was mostly pricing. Intel has high margins of its own and limited production capacity; they weren't interested in going as low as other SoCs, and theirs required further investment.

Also, their production capacity was insufficient: producing more Atom CPUs would quickly mean producing fewer of their more profitable CPUs. The solution would have been outsourcing, but they were very much against it.

It was a doomed niche once it became clear that x86 wasn't the selling point in that market that I think Intel expected it to be. On top of that, I reckon they were really unstable internally - I remember the Atom team being disbanded, or expecting to be disbanded, several times, plus budget cuts.

But I do think some of that engineering surely made it back into the Core iX processors.
 

Trimesh

Banned

My feeling is that not wanting to invest in stuff like that which isn't immediately profitable is a classic mistake made by companies that are highly successful (and profitable) in one market. They seemed to be assuming that they could just make a part and then suddenly everyone would sign up to use it. I don't think they should have been surprised that people didn't care much about the fact it was x86 - most people don't know or care what architecture the embedded system built into a device they use employs.

But in any case, cutting off further development funding was a terrible move from an optics point of view - if they had carried on moving the platform forward they would have had a much better chance of getting more people interested, but in the end all they did was reinforce the idea that Intel doesn't give a shit about screwing over their small customers.
 
No doubt it was a MASSIVE mistake; they were too deep into their business model to adapt fast enough when it started to backfire, and now it's hurting them in more ways than one.

With rising core counts driving a palpable die size increase at 14nm, and with the lackluster 10nm yields, their production capacity was maxed out to the point that, two generations ago, they had to move chipsets to older nodes - and that's still true for the upcoming generation. They also had to backport chips that were designed for 10nm to 14nm (Rocket Lake is a 14nm backport of the 10nm Sunny Cove design).
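To put rough numbers on that capacity squeeze, here's a toy dies-per-wafer calculation. The die sizes are hypothetical round figures I'm assuming for illustration, not actual Intel numbers, and it ignores yield entirely:

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard 300 mm wafer

def gross_dies_per_wafer(die_area_mm2: float, wafer_d: float = WAFER_DIAMETER_MM) -> int:
    """Classic gross-die approximation: wafer area over die area,
    minus an edge-loss term. Ignores defects, scribe lines and yield."""
    return int(math.pi * (wafer_d / 2) ** 2 / die_area_mm2
               - math.pi * wafer_d / math.sqrt(2 * die_area_mm2))

for label, area in [("hypothetical ~4-core die", 125.0),
                    ("hypothetical ~10-core die", 200.0)]:
    print(f"{label:26s} {area:5.0f} mm^2 -> ~{gross_dies_per_wafer(area)} gross dies/wafer")

# Going from ~125 mm^2 to ~200 mm^2 on the same node cuts gross dies per wafer
# by roughly 40%, which is how bigger 14nm dies eat into total fab output.
```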

In June, Jim Keller, one of Intel’s most influential chip designers, left the company reportedly over a dispute as to whether the company should outsource its production
Source: https://www.sdxcentral.com/articles...lores-intel-to-explore-external-fabs/2020/12/

Jim Keller supposedly left Intel because he wanted Intel to outsource some production. And now they're reportedly being forced to look into it, as their deadlines keep slipping.

Intel's 10nm is not TSMC's 10nm (it's not apples to apples), but they have serious capacity and yield problems there.

Their chips are still good, so one can only guess how they would compare if the yields were good and the process node they had access to were class-leading. I'm guessing well, but expensive. Expensive is also going to be a problem going forward, when you have competition that, comparatively, isn't.
 
Isn't TSMC already moving to 3nm this year?
I think next year.

The node name is not a good indicator - even TSMC says so at this point, even though they get the bragging benefits out of it.

Intel usually aimed to be better/smaller at the same node than its competitors, and the issue with 10nm was that they were too ambitious and didn't foresee a few of the manufacturing difficulties.

Intel's 10nm is actually competitive with other manufacturers' 7nm on paper (and there's one metric where they're actually ahead, which is theoretical transistor density); the issue is that yields are so bad they were stuck with dual cores and no GPU for the first batch, and to this day they can't go past 4 cores.

This caused a few problems: namely, they couldn't shrink the CPU as much as they could have if yields had been good and they had been able to be aggressive, so the gains were somewhat muted.

I'm not well informed on whether 3nm is supposed to be a big jump or not, but TSMC's 5nm seems to have been one in terms of density compared to TSMC's own 7nm.
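To put the name-vs-density point in numbers, here's a minimal sketch using the peak transistor density figures that usually get quoted for these nodes. The figures are my assumption (the commonly cited public estimates, not anything from this thread), and shipping chips come in well below these peaks:

```python
# Commonly quoted peak logic transistor densities, in millions of
# transistors per mm^2 (approximate public estimates, assumed here).
peak_density_mtr_per_mm2 = {
    "Intel 14nm":  37.5,
    "TSMC 10nm":   52.5,
    "TSMC 7nm":    91.2,
    "Intel 10nm": 100.8,
    "TSMC 5nm":   170.0,
}

baseline = peak_density_mtr_per_mm2["TSMC 7nm"]
for node, density in sorted(peak_density_mtr_per_mm2.items(), key=lambda kv: kv[1]):
    print(f"{node:11s} {density:6.1f} MTr/mm^2  ({density / baseline:.2f}x TSMC 7nm)")

# On these numbers Intel's "10nm" is ~1.1x TSMC's "7nm" despite the bigger
# number in the name, while TSMC 5nm is ~1.9x its own 7nm - which is why
# the 5nm generation reads as a genuine density jump.
```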
 
How is Intel's node structure better than TSMC's?
They have a tradition of being more aggressive with their gate and fin pitch targets than their competitors, which means they aim for smaller transistors on each node they release. They compare favourably because their transistor density "punches above its weight class".

Up to 14nm this was smooth sailing; from then on, and because they were too confident, it turned into a mud fight.

If yields for their 10nm were any good, it would be on par with other manufacturers' 7nm offerings.
[Image: TSMC process leadership slide, 2020-04-27]


Now, an overambitious structure and manufacturing approach is precisely what made Intel's 10nm worse in practice. It was very good on paper, but...

EDIT: this is a good article; it sums up a few things, with more context, better than I could:

-> https://www.eejournal.com/article/no-more-nanometers/
 
Arigato gozaimasu (thank you very much).

Besides node shrinks for x86, RISC-V, and ARM-based microarchitectures, can 3D stacking help extend Moore's Law and give a boost in performance?

CNET article on 3D stacking
Intel Foveros 3D stacking - The Verge

 
can 3D stacking help extend Moore's Law and give a boost in performance?
I think the problem with 3D stacking is cooling. Also, unless energy consumption keeps going down, you won't be able to continue that progress for long.

To be able to double the transistor count every few years, those transistors have to consume significantly less energy each, or total power consumption and thermals will start to grow significantly.

The other problem is that adding more steps to manufacturing might also increase costs. You need to be able to increase transistor count while decreasing the manufacturing cost per transistor.
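Going back to the energy point, here's a minimal sketch of that arithmetic, with illustrative per-generation scaling factors I'm assuming rather than anything measured:

```python
# At fixed clocks and activity, power scales roughly with
# transistor_count * switching_energy_per_transistor.
def relative_power(generations: int,
                   count_growth: float = 2.0,   # Moore's-law-style 2x per generation
                   energy_scale: float = 0.7):  # assumed 30% energy drop per generation
    """Total power after N generations, relative to today."""
    return (count_growth * energy_scale) ** generations

for gens in range(1, 6):
    print(f"after {gens} generation(s): {relative_power(gens):.2f}x the power budget")

# If energy per transistor only falls 30% while the count doubles each
# generation, five doublings need ~5.4x the power - the slack has to come
# from lower clocks, dark silicon, or (for 3D stacks) much better cooling.
```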
 

docbot

Banned
There are rumors of Apple launching a 32-core ARM-based chip this year already, and they are supposedly working on a 128-core GPU.

Don't forget that the MacBook Air is the lowest-end computing product Apple offers. I would be surprised if their real ARM-based pro desktop doesn't wipe the floor with Intel's and AMD's offerings.

And with Metal supposedly being somewhat compatible with Vulkan, Apple might cover some ground in the gaming department in the foreseeable future.

I also wouldn't put it past Nintendo to strike up a deal with Apple.
 

It's China, but they're the only ones I've seen doing ARM desktops so far. It being China-only probably means we won't be seeing any benchmarks in the wild, AFAIK.
 

FStubbs

Member
Apple has no incentive to make a deal with Nintendo to make hardware when Apple would rather sell you Apple hardware.

The only deal Apple would be interested in would be "put your games on iPhone and Mac".
 

Rikkori

Member
What can NV actually offer consoles over AMD? The answer is: nothing. If console makers want an alternative to DLSS and more RT performance, they can have that - the former is already in the works, and the latter was simply a choice they didn't want to make because the $/perf made no sense (and still doesn't). So why would they switch from already well-established x86 tools and games to ARM and NV - which is a greedy cunt known for not playing well with partners in such ventures in the past (and remember, the leadership didn't change)? It just makes no sense.

Plus if you look at long-term roadmaps wrt MCM and the like, AMD is still in the best position to succeed on that front.
 

D.Final

Banned
Now watch Apple create a VR headset.
 