
AMD Ryzen Thread: Affordable Core Act

Nachtmaer

Member
So, as expected. Between Haswell-E and Broadwell-E, but at a much better price. What disappoints me a bit is the OC potential. Looks like my 5930K @ 4.5GHz is competitive on all but the most parallel of computations. Is that normal with new silicon, or does it have nothing to do with that?

It's hard to say at this point. Usually the two main factors for clockspeeds are the architecture and the process the chips are built on. From what I've gathered, Zen seems to be built to clock reasonably high while not being a "speed demon" like Bulldozer or Netburst were. They basically strike a balance between frequency and IPC, like Intel does with Core.

I think the biggest reason holding clocks back is 14LPP being relatively immature and not as "good" as Intel's 14nm process. Ryzen seems to already be running close to its sweet spot, so there isn't much headroom left before power draw skyrockets and you hit a hard ceiling/get diminishing returns (needing higher and higher voltages for relatively small frequency bumps).
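That voltage/frequency wall follows from the first-order dynamic power relation P ∝ C·V²·f. A tiny sketch with made-up numbers (illustrative ratios, not measured Ryzen data) shows why chasing the last few hundred MHz gets expensive:

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f.
# The ratios below are illustrative, not measured Ryzen figures.

def rel_power(v_ratio: float, f_ratio: float) -> float:
    """Relative power draw for given voltage and frequency ratios."""
    return v_ratio ** 2 * f_ratio

# A 10% clock bump that needs 10% more voltage costs ~33% more power:
print(round(rel_power(1.10, 1.10), 2))  # 1.33

# Past the sweet spot it gets worse: +20% voltage for only +15% clock:
print(round(rel_power(1.20, 1.15), 2))  # 1.66
</imports>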
 

Durante

Member
Could this have to do with how the Summit Ridge die is divided up into two clusters of four cores? There's a semi-shared L3 in each cluster, but I haven't seen any info on how coherency and communication are handled between the clusters, so it may be that communication between threads on different clusters is what's causing these issues.
Yeah, that's why I'd like to see a core communication latency matrix (from/to each core). Would make it obvious if it's a cluster communication issue.
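For anyone who wants to eyeball this before proper tools appear, a rough version of such a matrix can be built by pinning two processes to cores i and j and timing a message round trip. A Linux-only Python sketch (uses `os.sched_setaffinity`; interpreter and pipe overhead dwarf the raw hardware latency, so only the *relative* differences between core pairs mean anything):

```python
# Rough core-to-core round-trip timing (Linux-only sketch).
# Absolute numbers are dominated by Python/pipe overhead; look only at
# relative differences between core pairs (e.g. same cluster vs cross-cluster).
import os
import time
from multiprocessing import Pipe, Process

ROUNDS = 2000

def echo(conn, core):
    os.sched_setaffinity(0, {core})  # pin the child to core j
    for _ in range(ROUNDS):
        conn.send(conn.recv())

def round_trip_us(core_a, core_b):
    parent, child = Pipe()
    p = Process(target=echo, args=(child, core_b))
    p.start()
    os.sched_setaffinity(0, {core_a})  # pin ourselves to core i
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        parent.send(b"x")
        parent.recv()
    elapsed = time.perf_counter() - t0
    p.join()
    return elapsed / ROUNDS * 1e6  # microseconds per round trip

if __name__ == "__main__":
    cores = sorted(os.sched_getaffinity(0))
    for j in cores[1:4]:  # a small slice of the full matrix
        print(f"core {cores[0]} <-> core {j}: {round_trip_us(cores[0], j):.1f} us")
```

Run across all (i, j) pairs, a cluster boundary would show up as a visibly higher band in the matrix.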
 

Sinistral

Member
Wow, as someone who only cares about gaming performance in my PC I am amazed how well the 4790K holds up even to this day.

I thought I might need a Ryzen with the 1080 Ti, but I can just upgrade my 4790K PC with it instead, since I'm targeting 1440p/60 short term and 4K HDR G-Sync later, so the CPU won't be a bottleneck.

That's the dilemma a lot of people are facing these days. If you've bought a decent CPU within the last few years, there is very little reason to upgrade that CPU for gaming. Intel competing with itself has largely been met with shrugs and hums as their own iterative gains in performance are small.

Whatever competition AMD provides, however small, can only be a good thing.
 

strata8

Member
It's hard to say at this point. Usually the two main factors for clockspeeds are the architecture and the process the chips are built on. From what I've gathered, Zen seems to be built to clock reasonably high while not being a "speed demon" like Bulldozer or Netburst were. They basically strike a balance between frequency and IPC, like Intel does with Core.

I think the biggest reason holding clocks back is 14LPP being relatively immature and not as "good" as Intel's 14nm process. Ryzen seems to already be running close to its sweet spot, so there isn't much headroom left before power draw skyrockets and you hit a hard ceiling/get diminishing returns (needing higher and higher voltages for relatively small frequency bumps).

Essentially, check out the power readings from ComputerBase:

[ComputerBase chart: total system power consumption]


20% increase in power usage between the 1700X and 1800X (and that's total system power!), but if you look at the benchmarks, performance only goes up 5%. In contrast, the difference in power usage between the i7-6800K and 6900K is similar, but performance goes up 40%.

Looks like Zen is close to its clockspeed limits, at least in this iteration. Extremely efficient though; it'll be a big win for AMD in the server market, and it's looking very promising for their APUs.
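Put as back-of-the-envelope numbers (using the rounded percentages above, not the exact ComputerBase measurements), the scaling efficiency of the two jumps is very different:

```python
# Performance gained per unit of extra power; 1.0 would be linear scaling.
# Percentages are the rounded figures quoted above, not exact measurements.

def scaling_efficiency(extra_power: float, extra_perf: float) -> float:
    return extra_perf / extra_power

# 1700X -> 1800X: +20% power for +5% performance (pure clock increase)
print(round(scaling_efficiency(0.20, 0.05), 2))  # 0.25

# i7-6800K -> 6900K: similar +20% power for +40% performance (two more cores)
print(round(scaling_efficiency(0.20, 0.40), 2))  # 2.0
```

Spending the power budget on extra cores pays off far better than spending it on clocks once you're past the V/f sweet spot.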
 

dr_rus

Member
That's the dilemma a lot of people are facing these days. If you've bought a decent CPU within the last few years, there is very little reason to upgrade that CPU for gaming. Intel competing with itself has largely been met with shrugs and hums as their own iterative gains in performance are small.

Whatever competition AMD provides, however small, can only be a good thing.

So what's the dilemma? =)
 

Thraktor

Member
Yeah, that's why I'd like to see a core communication latency matrix (from/to each core). Would make it obvious if it's a cluster communication issue.

If that is the problem, then how much control does a Windows developer have to optimise for this? Can they ensure that highly communicative threads are positioned on the same cluster, or would the Windows thread scheduler get in the way?
 

CaLe

Member
If I plan on coding, I assume an 1800X is still recommended? There would obviously be some Photoshop + 3ds Max work done too.
 

Nachtmaer

Member
Essentially, check out the power readings from ComputerBase:

[ComputerBase chart: total system power consumption]


20% increase in power usage between the 1700X and 1800X (and that's total system power!), but if you look at the benchmarks, performance only goes up 5%. In contrast, the difference in power usage between the i7-6800K and 6900K is similar, but performance goes up 40%.

Looks like Zen is close to its clockspeed limits, at least in this iteration. Extremely efficient though; it'll be a big win for AMD in the server market, and it's looking very promising for their APUs.

Yeah, that's pretty much what I meant. I'm sure things will get better over time as the process matures, similarly to how the RX 480's power draw was all over the place at launch too.

It makes me wonder if AMD are going to do a respin for a mid-gen refresh or if they're just going to ride this out until Zen+ or whatever they're going to call it for the 2000 series.
 

Durante

Member
If that is the problem, then how much control does a Windows developer have to optimise for this? Can they ensure that highly communicative threads are positioned on the same cluster, or would the Windows thread scheduler get in the way?
You can do it, but you'd have to do it by manual affinity setting. Which would mean game developers manually optimizing their thread mapping for specific CPUs. I don't see it happening on a relevant scale.
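For reference, this is roughly what that manual pinning looks like at process level. A hedged sketch: the Linux call is shown (`os.sched_setaffinity`); on Windows the analogous APIs are SetThreadAffinityMask/SetProcessAffinityMask, and the core numbers for "one cluster" are an assumption, since the exact enumeration depends on the SMT layout:

```python
# Pin the current process to one (assumed) four-core cluster. Linux-only
# sketch; Windows would use SetThreadAffinityMask / SetProcessAffinityMask.
import os

CLUSTER0 = {0, 1, 2, 3}  # hypothetical: first four logical CPUs = cluster 0

available = os.sched_getaffinity(0)            # cores we're allowed to use
target = (CLUSTER0 & available) or available   # fall back if cores missing

os.sched_setaffinity(0, target)                # restrict scheduling to target
print(sorted(os.sched_getaffinity(0)))
```

Doing this per game, per CPU model, is exactly the maintenance burden that makes it unlikely at scale.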
 

Durante

Member
I don't think you can make a fair comparison to TSMC. There's no huge high-end CPU running at 4 GHz on any TSMC process.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
You can do it, but you'd have to do it by manual affinity setting. Which would mean game developers manually optimizing their thread mapping for specific CPUs. I don't see it happening on a relevant scale.
No OS, to the best of my knowledge, currently presents apps with cluster information, but what I've seen Linux do is present clusters as NUMA nodes. Of course, Windows processor groups, being a clusterfuck of a CPU resource partitioning scheme, may not be capable of this.
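On Linux, at least the cache topology is discoverable from sysfs even without a NUMA-node presentation, e.g. which logical CPUs share cpu0's L3 (a sketch; index3 is typically the last-level cache, though the exact index can vary by part):

```python
# Which logical CPUs share cpu0's L3? (Linux sysfs; index3 is usually L3.)
# On a Ryzen 7 this would list one cluster's threads rather than all eight cores.
from pathlib import Path

path = Path("/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list")
info = path.read_text().strip() if path.exists() else "no L3 info exposed"
print(info)
```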
 

StaSeb

Member
Managed my expectations and am fine with the reviews. My 1700X should arrive tomorrow and I'm looking forward to building my system with it. Let's hope my board and memory work without too much trouble.
 

ethomaz

Banned
I don't think you can make a fair comparison to TSMC. There's no huge high-end CPU running at 4 GHz on any TSMC process.
But I can compare GPUs...

There is enough proof that GF's 14nm is not on par with Intel's 14nm or TSMC's 16nm.
 

Kambing

Member
Man, I'm really bummed out about this -- cancelled my 1800X order as I'll mainly be using it to game. Still stuck on a 2500K and would much rather get the 1080 Ti and see where CPUs are in 1-2 years.

How do you guys think AMD stock will react?
 
Managed my expectations and am fine with the reviews. My 1700X should arrive tomorrow and I'm looking forward to building my system with it. Let's hope my board and memory work without too much trouble.
I came to the same conclusion. The 1700X will be a fine upgrade from my 2500K and its ailing motherboard.
 
Man, I'm really bummed out about this -- cancelled my 1800X order as I'll mainly be using it to game. Still stuck on a 2500K and would much rather get the 1080 Ti and see where CPUs are in 1-2 years.

How do you guys think AMD stock will react?

AMD: 14.14 USD, down 0.82 (5.48%)
 

dr_rus

Member
dr_rus, can we talk now about how GF's 14nm is way behind other processes?... Ryzen is at its clock limit, needs a high vcore to overclock, and the power draw is big.

The opposite of Intel's 14nm or TSMC's 16nm.

http://m.neogaf.com/showpost.php?p=231205109

No, because GloFo's 14nm is essentially what Intel had at 22nm. Intel's 14nm is what GloFo and TSMC will have at ~10nm. You can't make any comparisons to TSMC here. Intel is still a gen ahead of the industry in process tech.
 

Steel

Banned
Man, I'm really bummed out about this -- cancelled my 1800X order as I'll mainly be using it to game. Still stuck on a 2500K and would much rather get the 1080 Ti and see where CPUs are in 1-2 years.

How do you guys think AMD stock will react?

If you want something cheap that'll be an upgrade over your 2500K for gaming, I'd suggest looking at how Ryzen 5 does. That being said, the stock's selling off today, but I don't expect that to last (the stock jerks around a lot).
 

Osiris

I permanently banned my 6 year old daughter from using the PS4 for mistakenly sending grief reports as it's too hard to watch or talk to her
Damn, I had hoped this would cause some downward price pressure on the 7700K. :/
 

Nachtmaer

Member
You can do it, but you'd have to do it by manual affinity setting. Which would mean game developers manually optimizing their thread mapping for specific CPUs. I don't see it happening on a relevant scale.

Not that I know anything about the real low-level stuff, and I don't remember where I read or heard this, but could this be related to how AMD kept the average latency between the CCXs' L3s roughly equal? To me it sounds like they made a trade-off: accept higher L3 latency overall to make sure there isn't a big difference between latency within a CCX and latency when communicating with the other one.

I could be totally wrong though.
 

ethomaz

Banned
No, because GloFo's 14nm is essentially what Intel had at 22nm. Intel's 14nm is what GloFo and TSMC will have at ~10nm. You can't make any comparisons to TSMC here. Intel is still a gen ahead of the industry in process tech.
OK, you changed your mind in the last 24 hours...

Ryzen will be out in a couple of days and it's very much on the same clocks / consumption level as what Intel has with their 14nm. So no idea what you're talking about.
 
Is anyone actually disappointed?

I like what I'm seeing. I can't wait to see how the Ryzen 5s look. If I can get $300-$800 performance for $100-$200, that's a positive.

If you care about gaming performance, you are probably disappointed with Ryzen. Games are just not the ideal candidate for throwing more cores at the problem, and Ryzen's lower IPC and clocks really let it down in this area.

Ryzen is a fantastic value chip for those working in highly parallel workloads though.
 

Weevilone

Member
Looking at this result makes me think about how much performance Ryzen may yet gain once compilers are properly optimized for the architecture. Right now some programs just don't seem to like the new architecture.

That's the main thing to keep an eye on. As-is, I can't justify one for my gaming rig at this time. No need to fight the likely BIOS immaturity issues while waiting for things to get better on the gaming front.

Glad to see some competition.
 

kuYuri

Member
Ryzen 7 competes more with the Broadwell-E lineup due to the core/thread count and prices IMO. So if you're looking from a purely gaming standpoint, the Ryzen 7 line isn't the best, just like Broadwell-E isn't the best.

If you're looking to do more beyond gaming like video editing and other workstation tasks, Ryzen 7 is a great value proposition compared to Broadwell-E.

For gaming, Ryzen 5 and Kaby/Skylake will be the more interesting face-off.
 
Is anyone actually disappointed?

I like what I'm seeing. I can't wait to see how the Ryzen 5s look. If I can get $300-$800 performance for $100-$200, that's a positive.

Look at the stock price and that will tell you whether people are happy, disappointed, or just meh.
 

Locuza

Member
Yeah, that's why I'd like to see a core communication latency matrix (from/to each core). Would make it obvious if it's a cluster communication issue.
PC Games Hardware did exactly this test:

2 + 2 Cores (16 MB L3$)
4 + 0 Cores ( 8 MB L3$)
[PCGH chart: Battlefield 1 core-scaling, 2+2 vs 4+0]

[PCGH chart: Watch Dogs 2 core-scaling, 2+2 vs 4+0]


http://www.pcgameshardware.de/Ryzen-7-1800X-CPU-265804/Tests/Test-Review-1222033/

There are two more games (Rise of the Tomb Raider and For Honor), but the difference there is very small.

But I can compare GPUs...

There are enough proof GF's 14nm is not on par with Intel's 14nm or TSMC's 16nm.
There is only the comparison between the GP107 on Samsung's 14nm process and the other Pascal GPUs on TSMC's 16nm:
https://www.computerbase.de/2016-10/geforce-gtx-1050-ti-test/7/
https://www.computerbase.de/2016-07/geforce-gtx-1060-test/7/

You could argue "well, I see 10% better results on the GPUs from TSMC", but even if that's true, it isn't much.
 

dr_rus

Member
Ok you changed your mind in the last 24 hours...

Really? The fact that Ryzen is at the same clocks/consumption as Intel doesn't tell us much about TSMC. It does tell us that GloFo's 14nm process is very solid, and thus it's unlikely that TSMC's is any better. Which is what I've said previously.
 

CryptiK

Member
Is anyone actually disappointed?

I like what I'm seeing. I can't wait to see how the Ryzen 5s look. If I can get $300-$800 performance for $100-$200, that's a positive.

You won't, though. These benchmarks show exactly that: they struggle to get that performance even with more cores, and the 5 series has fewer cores at the same clock speed.
 

SRG01

Member
dr_rus, can we talk now about how GF's 14nm is way behind other processes?... Ryzen is at its clock limit, needs a high vcore to overclock, and the power draw is big.

The opposite of Intel's 14nm or TSMC's 16nm.

http://m.neogaf.com/showpost.php?p=231205109

This is incorrect. It's the core count that's limiting the overclock, since the frequency becomes too high to synchronize across all cores.

Really? The fact that Ryzen is at the same clocks/consumption as Intel doesn't tell us much about TSMC. It does tell us that GloFo's 14nm process is very solid, and thus it's unlikely that TSMC's is any better. Which is what I've said previously.

Correct.
 

ethomaz

Banned
There is only the comparison between the GP107 on Samsung's 14nm process and the other Pascal GPUs on TSMC's 16nm:
https://www.computerbase.de/2016-10/geforce-gtx-1050-ti-test/7/
https://www.computerbase.de/2016-07/geforce-gtx-1060-test/7/

You could argue "well, I see 10% better results on the GPUs from TSMC", but even if that's true, it isn't much.
10% more clock or 10% less power consumption is huge for a CPU/GPU... you could increase the base clock from 4.0GHz to close to 4.5GHz on a CPU, or from 1.5GHz to close to 1.7GHz on a GPU.

It's a big difference between processes.
 

ethomaz

Banned
Really? The fact that Ryzen is at the same clocks/consumption as Intel doesn't tell us much about TSMC. It does tell us that GloFo's 14nm process is very solid, and thus it's unlikely that TSMC's is any better. Which is what I've said previously.

This is incorrect. It's the core count that's limiting the overclock, since the frequency becomes too high to synchronize across all cores.



Correct.
That is not what technical comparisons and articles say about GF's 14nm.

Intel and TSMC are a step ahead of the others in silicon process.
 