
AMD Ryzen Thread: Affordable Core Act

tim.mbp

Member


I'm waiting for this so next month I can do an ITX build.

Is that the only ITX board that's been announced?
 

Datschge

Member
·feist· said:
Note, Tech Report and TechPowerUp used only Nvidia GPUs for their testing.
Would be really interesting to see if and how that specific chart would differ with AMD GPUs. Maybe once Vega is out.

I wonder if Rocket League would perform better on Kepler cards like the 780 TI, since they might have spent more effort on DX9 with the older generations of GPUs.
It may well. Though the oddity is Nvidia randomly performing 25% weaker on Ryzen than on Intel for seemingly no reason.
 

thelastword

Banned
So it sounds like the 1600 and 1500 trade blows with the 7600k at cheaper price points. Also sounds like a 1400 isn't something people should be looking at unless they really wanna go budget.
The 1400 is pretty good too, it's just that it has half the L3 cache the other R5's have; higher cache, higher clocks, and faster memory all yield better performance.

So yes, it's a cost-saving measure, and Ryzen does seem to perform better with more L3 cache. But in that case, getting a good kit of RAM and overclocking your 1400 should keep it ahead of the 7400 for the foreseeable future...

At this point, I'm thinking the R3's will also have the same 8MB of L3 cache as the R5 1400...
 

Datschge

Member
At this point, I'm thinking the R3's will also have the same 8MB of L3 cache as the R5 1400...
The top R3 model may have the same base/turbo clocks as the R7 1800X and R5 1600X and 16MB of L3 cache, just to have an oddly balanced 4c/4t model that at first may seem like a good deal. ^^
 

Paragon

Member
That turned out to be jumping to the wrong conclusion, by the way. The actual issue is that some games run worse on Nvidia cards in DX12 compared to DX11. This was true six months ago, and Ryzen hasn't magically fixed it.
There may be some small performance gains to be made by Nvidia optimisation, but it's not going to be the silver bullet that makes Ryzen vastly superior to Intel in all gaming benchmarks, despite what some plonkers in /r/AMD believe.
A lot of the explanations I've seen don't really present the issue very well, or people just post graphs without explaining what they represent.

It seems that, with AMD GPUs, Ryzen and Intel CPUs perform the same.
But with NVIDIA GPUs, Ryzen is performing much worse than Intel in some games.

[benchmark chart: RX 480 vs GTX 1060 framerates on Intel and Ryzen CPUs]


Ignore that the RX480 runs the game at 180 FPS vs 160 FPS on the GTX 1060.
The issue is that the 1060 drops from 160 FPS to 110 FPS when you swap out the Intel CPU for a Ryzen CPU, while the RX480 performs the same on both.
That's a drop of 50 FPS, roughly a third of its performance, and it points to the issue being the GPU driver, not the game.

I really hope it's something that NVIDIA just have to optimize their driver for, and not an architectural difference between the two causing this.
 

thelastword

Banned
The top R3 model may have the same base/turbo clocks as the R7 1800X and R5 1600X and 16MB of L3 cache, just to have an oddly balanced 4c/4t model that at first may seem like a good deal. ^^
It would be awesome if it had 24MB of L3 cache and overclocked to 5GHz... That would make a $129 chip compete with a 7700K for shits and giggles... Hey, but I wonder if they'd do something odd with the R3 though...
 

dr_rus

Member
A lot of the explanations I've seen don't really present the issue very well, or people just post graphs without explaining what they represent.

It seems that, with AMD GPUs, Ryzen and Intel CPUs perform the same.
But with NVIDIA GPUs, Ryzen is performing much worse than Intel in some games.

[benchmark chart: RX 480 vs GTX 1060 framerates on Intel and Ryzen CPUs]


Ignore that the RX480 runs the game at 180 FPS vs 160 FPS on the GTX 1060.
The issue is that the 1060 drops from 160 FPS to 110 FPS when you swap out the Intel CPU for a Ryzen CPU, while the RX480 performs the same on both.
That's a drop of 50 FPS, roughly a third of its performance, and it points to the issue being the GPU driver, not the game.

I really hope it's something that NVIDIA just have to optimize their driver for, and not an architectural difference between the two causing this.

Architectural difference in what exactly?
 

Doesn't GTA V have some weird thing where it actually goes slower if you have more than 4 cores? I remember GamersNexus had a video on it.

This is why you test and showcase the minimum frame-rates.

I've witnessed this in other reviews and have first-hand experience with it myself; interestingly, the issue arises because they're pushing over 100 fps. I'm not sure if it's something introduced in a recent update, as I hadn't heard about it or experienced it until recently.

I've witnessed it to a lesser extent on my i7 4790K at 4.7GHz: pushing over 150 fps IIRC, I would get slight stutters. I don't recall seeing this happen until recently, so I'm wondering if it was introduced in a recent update to the game or even the drivers. Or perhaps I just hadn't tested at these frame-rates before.

I've played hundreds of hours of this game, I can't even remember anymore lol.

Here's an investigation video from Gamer's Nexus - GTA i5 Testing Mystery: Better CPU = More Stuttering
 

Datschge

Member
What architectural difference would explain the comparative results between 480 and 1060 in a DX9 game running on different CPUs?
So you were referring to the GPU hardware, and saying that (for that DX9 game) the 480 is a better match for Ryzen and there's nothing Nvidia can do about it?
 

dr_rus

Member
So you were referring to the GPU hardware, and saying that (for that DX9 game) the 480 is a better match for Ryzen and there's nothing Nvidia can do about it?

I wasn't referring to anything, I asked a question about this:

I really hope it's something that NVIDIA just have to optimize their driver for, and not an architectural difference between the two causing this.

I can't think of any architectural difference between 480 and 1060 which may cause this. Can you?
 

Datschge

Member
I can't think of any architectural difference between 480 and 1060 which may cause this. Can you?
No; if anything, it's the driver being limited by single-thread performance (possibly combined with inter-CCX thread hopping). Which means there is clear room for driver optimization on Ryzen in the 1060's case. I guess we agreed all along then, thanks for the clarification.
 

Paragon

Member
Ran into my first real problem with the 1700X today in Bayonetta: overclocking the memory to 3596MT/s only boosted the minimum by 4-5 FPS, the same as I've seen in every other game so far.
Restricting the game to 4c/8t or 4c/4t did nothing to help.


UPDATE: I have found the cause of the low performance: having HPET enabled.
Details (including fix) in this post.

I can't think of any architectural difference between 480 and 1060 which may cause this. Can you?
I meant their driver architecture/design rather than the GPUs. Basically, that it might not be a simple fix.
 

Seronei

Member
Ran into my first real problem with the 1700X today in Bayonetta: overclocking the memory to 3596MT/s only boosted the minimum by 4-5 FPS, the same as I've seen in every other game so far.
Restricting the game to 4c/8t or 4c/4t did nothing to help.

I meant their driver architecture/design rather than the GPUs. Basically, that it might not be a simple fix.
That's really weird, there's no way that should happen. Even if the game were completely single-threaded it should get basically the same FPS. Would be nice to see more benchmarks of it.

I think you mixed up the links though.
 
Ran into my first real problem with the 1700X today in Bayonetta: overclocking the memory to 3596MT/s only boosted the minimum by 4-5 FPS, the same as I've seen in every other game so far.
Restricting the game to 4c/8t or 4c/4t did nothing to help.

I meant their driver architecture/design rather than the GPUs. Basically, that it might not be a simple fix.
Bayonetta has a weird issue with AMD CPUs in general, so it might need a patch before it works perfectly on Ryzen.
 

Paragon

Member
Bayonetta has a weird issue with AMD CPUs in general, so it might need a patch before it works perfectly on Ryzen.
This is exactly what I was hoping would not happen with Ryzen.
I thought that most of the issues with certain games on AMD's FX CPUs were related to their slow performance/IPC, not something that would carry forward to Ryzen.

I wonder how performance is with AMD GPUs; if it's the game or the NVIDIA drivers at fault.
A 7 year old game like Bayonetta should be completely locked to 60 on a PC like this.

Is that a general issue or just specific scenes? Here is a video from the start of the game using an R5 1600 + RX 470 which, aside from cutscenes and loading screens, doesn't appear to drop below 59 fps. So possibly an Nvidia driver-specific issue again?
The game was locked to 60 until I reached that area.
 

dr_rus

Member
I meant their driver architecture/design rather than the GPUs. Basically, that it might not be a simple fix.
I find it more likely that they just won't bother, as running games at 720p, or even DX9 games at 1080p at >100 fps, is hardly the focus of their optimization efforts.
 

IC5

Member
https://m.hardocp.com/article/2017/04/11/amd_ryzen_5_1600_1400_cpu_review/

The power usage is quite high after overclocking (their results are Ryzen at 4GHz and Intel at 5GHz). If you are an enthusiast gamer who needs to keep their monthly bills as low as possible, you might want to go with Intel.

If you don't think power usage matters: moving my girlfriend from a final-revision slim PS3 to a Blu-ray player, for her F.R.I.E.N.D.S.-on-in-the-background habit, dropped our electric bill by at least $15.
* We are religious about keeping lights off and unplugging electronics when not in use, and only heat rooms as needed.
 
https://m.hardocp.com/article/2017/0...0%2C3285887976

The power usage is quite high after overclocking (their results are Ryzen at 4GHz and Intel at 5GHz). If you are an enthusiast gamer who needs to keep their monthly bills as low as possible, you might want to go with Intel.

If you don't think power usage matters: moving my girlfriend from a final-revision slim PS3 to a Blu-ray player, for her F.R.I.E.N.D.S.-on-in-the-background habit, dropped our electric bill by at least $15.
If you've got P-state overclocking, you can drop the clocks and power usage down to 2.2GHz or lower.
 

IC5

Member
If you've got P-state overclocking, you can drop the clocks and power usage down to 2.2GHz or lower.
Right. But I'm saying that if you game a few hours every day and plan to do it all maxed out, you might want to go with Intel quad cores due to the power savings, and therefore a lower monthly bill.
 

Seronei

Member
Right. But I'm saying that if you game a few hours every day and plan to do it all maxed out, you might want to go with Intel due to the power savings, and therefore a lower monthly bill.

Ryzen will pull significantly less power while gaming than in that test. You really can't reach that conclusion from a gaming perspective.

[chart: CPU power consumption while gaming]
 

Datschge

Member
https://m.hardocp.com/article/2017/...400_cpu_review/?_e_pi_=7,PAGE_ID10,3285887976

The power usage is quite high after overclocking (their results are Ryzen at 4GHz and Intel at 5GHz). If you are an enthusiast gamer who needs to keep their monthly bills as low as possible, you might want to go with Intel.

If you don't think power usage matters: moving my girlfriend from a final-revision slim PS3 to a Blu-ray player, for her F.R.I.E.N.D.S.-on-in-the-background habit, dropped our electric bill by at least $15.
Link doesn't work.

It's been known from the beginning that overclocking disables all the different forms of turbo on Ryzen, essentially putting all cores into a permanent turbo mode. Combined with binning for any model below the top one, this lets power usage grow dramatically. If you are conscious of power consumption, it's better to stay at stock speeds (and pick the model accordingly), as the single-core, all-core and XFR turbo can work their magic at far lower power consumption. Be sure to also use AMD's Ryzen-specific Balanced power plan, which enables power saving while (unlike the standard Balanced profile) mostly reaching the performance of the standard High Performance profile.
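For reference, you can list the installed plans and switch between them from an elevated prompt; a rough sketch (the GUID below is a placeholder, and the AMD plan only shows up after installing AMD's chipset drivers):
Code:
# List installed power plans with their GUIDs
powercfg /list
# Activate the AMD Ryzen Balanced plan by its GUID (placeholder)
powercfg /setactive <GUID-of-AMD-Ryzen-Balanced-plan>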
 

IC5

Member
Link doesn't work.

It's been known from the beginning that overclocking disables all the different forms of turbo on Ryzen, essentially putting all cores into a permanent turbo mode. Combined with binning for any model below the top one, this lets power usage grow dramatically. If you are conscious of power consumption, it's better to stay at stock speeds (and pick the model accordingly), as the single-core, all-core and XFR turbo can work their magic at far lower power consumption. Be sure to also use AMD's Ryzen-specific Balanced power plan, which enables power saving while (unlike the standard Balanced profile) mostly reaching the performance of the standard High Performance profile.
Yeah. I'm talking about the enthusiast gamer who wants to run their games with their hardware maxed.
Link updated. I must have fudged it while editing that post.
 

Sinistral

Member
Yeah. I'm talking about the enthusiast gamer who wants to run their games with their hardware maxed.
Link updated. I must have fudged it while editing that post.

That gamer probably won't mind a few extra dollars a year for electricity...
 

Seronei

Member
Yeah. I'm talking about the enthusiast gamer who wants to run their games with their hardware maxed.
Link updated. I must have fudged it while editing that post.

If the game runs all threads maxed on an R7 at 4GHz it will destroy the 7700K in performance. If you want to save power from that, just cap the FPS lol.
 

IC5

Member
Ryzen will pull significantly less power while gaming than in that test. You really can't reach that conclusion from a gaming perspective.

[chart: CPU power consumption while gaming]
Well, these don't show power usage after overclocking. I guess wait until probably next week, when HardOCP posts their gaming-specific article.
But based on the power scaling they showed with Handbrake, I bet we see a similar separation while gaming, albeit with lower overall power usage from both vendors' chips due to lower average CPU load while gaming.
 

Seronei

Member
Well, these don't show power usage after overclocking. I guess wait until probably next week, when HardOCP posts their gaming-specific article.
But based on the power scaling they showed with Handbrake, I bet we see a similar separation while gaming, albeit with lower overall power usage from both vendors' chips due to lower average CPU load while gaming.

The difference will be much, much smaller: almost no games use 16 threads (that benchmark does), though many will use all 8 threads of an i7.
 
https://m.hardocp.com/article/2017/04/11/amd_ryzen_5_1600_1400_cpu_review/

The power usage is quite high after overclocking (their results are Ryzen at 4GHz and Intel at 5GHz). If you are an enthusiast gamer who needs to keep their monthly bills as low as possible, you might want to go with Intel.

If you don't think power usage matters: moving my girlfriend from a final-revision slim PS3 to a Blu-ray player, for her F.R.I.E.N.D.S.-on-in-the-background habit, dropped our electric bill by at least $15.
* We are religious about keeping lights off and unplugging electronics when not in use, and only heat rooms as needed.

I'm talking per month. There are a lot of people who can make splurge purchases, but need monthly costs as low as possible.

There is literally zero chance a 60-watt difference (a slim PS3 uses ~80 watts for Blu-ray playback vs ~20 watts on a dedicated player) saved you $15+ a month. Even at the most expensive kWh rate, keeping it on 24 hours a day every single day would only be an ~$8 difference.

You're either missing something or your power bill fluctuates way more than you think.
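Here's the arithmetic behind that ~$8 ceiling, assuming a high-end rate of $0.20/kWh:
Code:
# 60 W of extra draw, 24 hours a day for a 30-day month
$watts = 60
$kwhPerMonth = $watts * 24 * 30 / 1000   # 43.2 kWh
$kwhPerMonth * 0.20                      # ≈ $8.64/month at the assumed $0.20/kWh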
 

Paragon

Member
https://m.hardocp.com/article/2017/...400_cpu_review/?_e_pi_=7,PAGE_ID10,3285887976
The power usage is quite high after overclocking (their results are Ryzen at 4GHz and Intel at 5GHz). If you are an enthusiast gamer who needs to keep their monthly bills as low as possible, you might want to go with Intel.
That link isn't working for me, but I assume you're referring to this:


https://www.hardocp.com/article/2017/04/11/amd_ryzen_5_1600_1400_cpu_review/5


Those numbers seem very high to me. Ryzen CPUs seem to require a lot more voltage for 4GHz than for anything lower, which might be causing it.
With an R7-1700X overclocked to 3.9GHz (the fastest it would hit at or below 1.35V), my UPS reports that the system draws 36W when idle, which is about half of my previous i5-2500K system - though I haven't moved all my hard drives into this PC yet.
Even using a fixed 3.9GHz clock rather than P-state overclocking, it seems to drop to 36W at idle, though I thought a fixed multiplier was supposed to disable power management.
Perhaps I don't have the P-state overclocking properly configured yet, but with it the CPU does reduce its clock speed when idle rather than staying at a fixed 3.9GHz as with multiplier overclocking.

With 100% CPU load at a sustained 3.9GHz, I've never seen it draw more than 180W. I used ASUS' RealBench X264 encoding test to check this.
It pulled 200W for a fraction of a second at the start, but I'm guessing that's XFR. I had to set the P-state overclock to 3.975GHz for it to actually stay at 3.9GHz under sustained load.

You have to remember that this is an 8-core CPU so it's doing a lot of work at 100% load.
I've never seen more than 70% load in games, and it's usually well below 50% on average.
Current-gen games are typically 30-50% while older games are 5-15%.

Obviously I've read the reviews, but I had not really done much benchmarking yet; just using it for work and playing games.
After those Bayonetta performance issues, I ran Cinebench R15 on both PCs just to confirm that everything was performing as it should - and it was.
  • The 2500K scored 138cb single-threaded and 438cb multi-threaded.
  • The R7-1700X scored 160cb single-threaded and 1712cb multi-threaded.
Fully loaded, it's doing 3.9x the work of that old system - which is incredible.
That's also approaching 2x the performance of a 7700K.
I guess it never really sank in until I had the two running side by side and the 1700X was able to finish both tests before the 2500K had finished the single-core test.
It explains why things like video editing have been so much faster than I expected them to be.
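Putting rough numbers on that comparison (the 7700K figure is the commonly reported stock Cinebench R15 multi-thread score, not something I measured myself):
Code:
1712 / 438   # ≈ 3.91× the 2500K's multi-threaded score
1712 / 970   # ≈ 1.77× a stock 7700K (assumed ~970 cb)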

But even tasks like video editing are not going to be running the CPU at 100% load.
It's only going to have a sustained 100% load when you're actually leaving it to handle things like encoding video files, not when you're working/gaming on the system.

If you don't think power usage matters: moving my girlfriend from a final-revision slim PS3 to a Blu-ray player, for her F.R.I.E.N.D.S.-on-in-the-background habit, dropped our electric bill by at least $15.
* We are religious about keeping lights off and unplugging electronics when not in use, and only heat rooms as needed.
The PS3 Slim seems to draw 75W for Blu-ray playback, so that doesn't surprise me.
That kind of power consumption is ridiculous!
A dedicated Blu-ray player is probably below 5W now.
 

IC5

Member
I realize the Blu-ray player/PS3 slim vs. CPU power comparison is very loose.

I was simply trying to point out that Intel seems to use a lot less power when overclocked to the max vs Ryzen OC'd to the max. And hey, that can actually make a difference in your monthly bill.

But for a little more specifics:

I have seen two PS3 slim power usage charts. Both compared it to a Samsung BD-S3600, which is officially rated at 30W operation, and in both the Samsung used about 20W for Blu-ray playback.

I have a Sony BDP-S390. It is rated at 11W during operation.

In my case, F.R.I.E.N.D.S. was 720p h.264 rips running off the slim's internal hard drive. PS3 power usage when reading h.264 rips is an unknown; it's got to be at least as much as Blu-ray playback.

My BDP-S360 was pulling the same files off a compact 3TB USB hard drive, which only requires the power provided over USB to operate.
 
Yeah I saw these benchmarks and was confused - what is going on here?

Speaking from my experience going through benchmarks many years ago, I got the distinct impression that Nvidia's GeForce series is highly CPU-dependent while AMD's (ATI) cards are largely CPU-independent, meaning the work is (almost) entirely GPU-bound aside from other system bandwidth limits. The advantage is that a GeForce benefits more from CPU and system upgrades, whereas AMD's cards do not (much), but it sacrifices predictable output in doing so. Whether that is really an advantage or not is not something I want to get involved in (in the end I bought an AMD CPU in combination with a GeForce anyway), but that's the impression, which may be wrong, I got then, and it seems like that's what's happening here.

The severe difference between the CPU brands is probably a consequence of that behavior: the driver waits on a CPU call that never arrives in time, leading to that frame being dropped. That's probably due to a driver issue (I doubt Nvidia has had the R7 to test with for very long) or a missing instruction set between brands; the lack of the SSE instruction set used to be an issue like that (a long time ago). The latter would be a real problem, while the former is one driver update away from being fixed. I'd say a driver issue is the case here, due to the difference being almost a precise 25%.

Alternatively, the hardware setup they used may simply not support this combination correctly yet.
Apologies if this is seen as a drive-by post; I'm just looking at this myself for potential upgrades as well.
 

Sinistral

Member
I'm talking per month. There are a lot of people who can make splurge purchases, but need monthly costs as low as possible.

I'll ignore the questionable habits of people...

If we're sticking to this HardOCP comparison... then you have to compare other factors.

Just taking these at surface level, as they're not directly comparable tasks.

$220 1600 will do this task in 70 seconds at 283W.
$350 7700K will do this task in 78 seconds at 215W.

A difference of $130 up front. You save a possible 68W when the system is running balls-out, but the task also takes longer on the 7700K. How often and how long you run such tasks, and your local kWh rate, will further change the difference.
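Per task, the energy actually works out slightly in the 7700K's favour despite the longer run; a rough sketch from those two data points:
Code:
# joules = watts × seconds, per Handbrake run
283 * 70   # R5 1600 @ 4GHz: 19,810 J
215 * 78   # 7700K @ 5GHz:   16,770 J (~15% less energy per task)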

So sure, I mean the general opinion up to this point is that for any gaming-only PC the 7700K is king right now. So if you can only splurge once in a while, splurge on the 7700K.
 

Paragon

Member
CRITICALLY IMPORTANT INFORMATION: If you have a Ryzen system, you must not enable the HPET feature.
This must be why AMD are removing the requirement for HPET from the Ryzen Master software.

Earlier today I posted about performance issues with the Bayonetta port - where a 7 year old game was dropping below 30 FPS in places for no apparent reason.
After hours of experimenting with all the suggested fixes and doing some testing of my own, I finally found what was causing this: the HPET feature.

I think that Ryzen Master was the first piece of software that I installed on this PC right after setting up Windows, and the current version of it will not run without HPET enabled.
After removing the Boot Configuration Data value that Ryzen Master set, which forces HPET to be used, the game is now running at a flawless 60 FPS as it should.

To fix this, open up PowerShell with Admin privileges and enter the following command:
Code:
# Remove the BCD value that forces Windows to use HPET as its timer
BCDEdit /deletevalue useplatformclock

You then have to restart your PC for this to take effect.
Use 'restart' and not 'shut down'.
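If you want to confirm the value is gone, or put it back later (the current Ryzen Master needs HPET enabled), the same tool covers both; a quick sketch:
Code:
# Verify: useplatformclock should no longer appear in the output
bcdedit /enum '{current}'
# To re-enable HPET later (e.g. for Ryzen Master):
bcdedit /set useplatformclock true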
 

Larogue

Member
I think the 1600 is the best if you are fine with OCing it a little bit. Otherwise get the 1600X and run it stock, or the 1500X if you wanna save some $$.
 
I think the 1600 is the best if you are fine with OCing it a little bit. Otherwise get the 1600X and run it stock, or the 1500X if you wanna save some $$.

Yep, especially if you plan to use the included cooler to get the 1600 up to ~3.7-3.8GHz. You can match or beat the performance of the 1600X and save $50 in the process.

It's the ultimate sweet spot gaming chip.
 
I realize the Blu-ray player/PS3 slim vs. CPU power comparison is very loose.

I was simply trying to point out that Intel seems to use a lot less power when overclocked to the max vs Ryzen OC'd to the max. And hey, that can actually make a difference in your monthly bill.

But for a little more specifics:

I have seen two PS3 slim power usage charts. Both compared it to a Samsung BD-S3600, which is officially rated at 30W operation, and in both the Samsung used about 20W for Blu-ray playback.

I have a Sony BDP-S390. It is rated at 11W during operation.

In my case, F.R.I.E.N.D.S. was 720p h.264 rips running off the slim's internal hard drive. PS3 power usage when reading h.264 rips is an unknown; it's got to be at least as much as Blu-ray playback.

My BDP-S360 was pulling the same files off a compact 3TB USB hard drive, which only requires the power provided over USB to operate.

If you care about power consumption then you're not going to be running your CPU "overclocked to the max". Drop those Ryzen CPUs down to 3.7-3.8GHz and they become very competitive in performance per watt.
 
Thinking about putting my planned R5 1600 build in a µATX case. Do I have to consider thermal aspects somewhat like with mITX, or will that be a non-issue even with larger GPUs (like Vega or something)?
 