
AMD Ryzen Thread: Affordable Core Act

No surprise that games like The Witcher 3, Battlefield 1 and Rise of the Tomb Raider, which had the worst performance figures for Ryzen, show the largest gains (~12-18% or more) simply by running the memory at 3000MHz or 3200MHz.

Even applications like Handbrake, which regularly fell short of Intel, have shown double-digit performance increases. The R7s excelled at 7-Zip even with poor memory speeds; with higher speeds that lead also grows by double digits.


Seems Infinity Fabric speed (derived from RAM speed) continues to be crucial to overall performance, beyond what CPU overclocking brings and very likely beyond any claims of critical flaws inherent in the CCX layout itself. The northbridge-speed sensitivity sounds more like a successor to Phenom than Bulldozer, since Bulldozer tweaks didn't bear as much fruit as Phenom tweaks did.


Too early to say anything conclusive on RAM and Fabric speed, optimisations, or the extent to which CCX penalties play a role, so further testing is still needed.



-------



HT4U retested Ryzen with new UEFI updates for stability and other improvements since launch.

HT4U.net —— Processors: AMD Ryzen 7 Reloaded: R7 1700 to 1800X Tested Again


Memory Scaling.

DDR4 3200MHz vs 2133, 2400, 2666


Google Translate | German: https://www.ht4u.net/reviews/2017/a...]&prod[]=AMD+Ryzen+7+1800X+[8C/16T@DDR4-3200]






[Image: laser-temps2yryc.png - laser temperature measurements]



"Temperatures - lie and truth:"
https://translate.googleusercontent...x6.php&usg=ALkJrhjrJBclns5Y4ycY6a1JOCNosWSJyA




That's not an argument I remember making at all. I know they're not going to overclock better.
Heh, I didn't mean to imply you had. That wasn't a straw man or anything. Only a random, semi-related BTW comment I had wanted to post sooner and not a response to anything you had mentioned.

I can't say I agree that Ryzen 7s aren't a good proposition for most workstations, even with the prevalence of lightly-threaded applications. It does seem to be less than optimal for your needs. Ultimately it comes down to the user's workflow, so for some it would not be a good match.
 
PC Perspective —— Ashes of the Singularity Gets Ryzen Performance Update

Legit Reviews —— AMD Ryzen Performance Update Released For Ashes of the Singularity

Tom's Hardware —— AMD Ryzen's First Game Optimization: 'Ashes Of The Singularity: Escalation,' Tested

HardOCP TV YouTube —— AMD Ryzen Optimizations in Ashes of the Singularity Oxide Engine



Tom's Hardware CPU Test
https://abload.de/img/xebi9.jpg
https://abload.de/img/muz3o.jpg
https://abload.de/img/22bq6.jpg
https://abload.de/img/9pxtx.jpg

Tom's Hardware GPU Test
https://abload.de/img/ngabn.jpg
https://abload.de/img/5xa2t.jpg
https://abload.de/img/piyen.jpg
https://abload.de/img/0azsq.jpg



Those results are pretty impressive, as they represent a 22.7% gain at 4K, a massive 31.2% gain at 1440p and a very nice 26.9% gain at 1080p. Keep in mind that we are testing with a stock-clocked AMD Ryzen 7 1700 processor with DDR4-2933 memory at CL14 timings!

Here is what Stardock and Oxide CEO Brad Wardell had to say about the game update in AMD's press release:

"I've always been vocal about taking advantage of every ounce of performance the PC has to offer. That's why I'm a strong proponent of DirectX 12 and Vulkan because of the way these APIs allow us to access multiple CPU cores, and that's why the AMD Ryzen processor has so much potential," said Stardock and Oxide CEO Brad Wardell. "As good as AMD Ryzen is right now – and it's remarkably fast – we've already seen that we can tweak games like Ashes of the Singularity to take even more advantage of its impressive core count and processing power. AMD Ryzen brings resources to the table that will change what people will come to expect from a PC gaming experience."

The good news is that AMD and Oxide aren't done making Ryzen optimizations, as they think there are still more things that can be done to squeeze even more performance out of the architecture with more real-world tuning. Now that AMD has released Ryzen and game developers have plenty of chips to use, they can look at the data from their instruction sequences and make some simple code changes that offer substantial performance gains. Game developers are able to use performance profilers like Intel VTune Amplifier 2017 to get the most performance possible from their multi-threaded processors, and AMD hasn't developed a comparable tool for Ryzen just yet. Hopefully, the performance numbers that you see here are just the start and more game titles will be optimized. This is going to be a huge task for AMD developer relations, but we know the North American Developer Relations team is being pretty aggressive.




These are substantial performance improvements with the new engine code! At both 2400 MHz and 3200 MHz memory speeds, and at both High and Extreme presets in the game (all running in DX12 for what that's worth), the gaming performance in this GPU-centric test is improved. At the High preset (which is the setting that AMD used in its performance data for the press release), we see a 31% jump in performance when running at the higher memory speed and a 22% improvement with the lower-speed memory. Even when running in the more GPU-bottlenecked state of the Extreme preset, the performance improvement for the Ryzen processors with the latest Ashes patch is 17-20%!

So what exactly is happening to the engine with v26118? I haven't had a chance to have an in-depth conversation with anyone at AMD or Oxide yet on the subject, but at a high level, I was told that this is what happens when instructions and sequences are analyzed for an architecture specifically. "For basically 5 years", I was told, Oxide and other developers have dedicated their time to "instruction traces and analysis to maximize Intel performance" which helps to eliminate poor instruction setup. After spending some time with Ryzen and the necessary debug tools (and some AMD engineers), they were able to improve performance on Ryzen without adversely affecting Intel parts.
 
Those results are pretty impressive, as they represent a 22.7% gain at 4K, a massive 31.2% gain at 1440p and a very nice 26.9% gain at 1080p. Keep in mind that we are testing with a stock-clocked AMD Ryzen 7 1700 processor with DDR4-2933 memory at CL14 timings!


B-but the CCX design... :p

That's massive. Anyone with any sense should have known that the R7s had way more to give when in some cases the CPU load was 50% during some game benches. Once you whack in 3200MHz+ RAM you'd get performance indistinguishable from a 7700K. Who'd have thought?
 

Renekton

Member
Well, Oxide specifically coded their game around that CCX design; we're not sure if many other developers will do the same (I trust DICE will, though).

BTW is isolating to one CCX the ONLY solution around this Ryzen issue? I kinda want games to use all available threads :p

edit2:
I was told that this is what happens when instructions and sequences are analyzed for an architecture specifically. "For basically 5 years", I was told, Oxide and other developers have dedicated their time to "instruction traces and analysis to maximize Intel performance" which helps to eliminate poor instruction setup. After spending some time with Ryzen and the necessary debug tools (and some AMD engineers), they were able to improve performance on Ryzen without adversely affecting Intel parts.
 

Caayn

Member
Well, Oxide specifically coded their game around that CCX design; we're not sure if many other developers will do the same (I trust DICE will, though).
Luckily it's done on an engine level. So the engine itself needs to be optimized for it.

If Epic and DICE (for example) implement it, Unreal and Frostbite should perform better on Ryzen as well.
 
Well, Oxide specifically coded their game around that CCX design; we're not sure if many other developers will do the same (I trust DICE will, though).

BTW is isolating to one CCX the ONLY solution around this Ryzen issue? I kinda want games to use all available threads :p

Some were trying to imply that there was no way around the CPU Complex design being detrimental to performance. 400 hours of work to optimize for Ryzen is all it took, apparently. That's like a team of 5 people working over a few weeks. And bang, you get a 20-30% increase in fps, matching Intel's HEDT and with more performance to come. Of course, if devs don't bother with optimizations you get lower performance in some games, but why would they not, if the optimizations are fairly trivial?
 

Paragon

Member
B-but the CCX design... :p

That's massive. Anyone with any sense should have known that the R7s had way more to give when in some cases the CPU load was 50% during some game benches. Once you whack in 3200MHz+ RAM you'd get performance indistinguishable from a 7700K. Who'd have thought?

It's almost like this is exactly what people have been saying; that the problem with the CCX design is that many applications/games will need to optimize specifically for Ryzen CPUs rather than just making an application/game that is well multi-threaded.
The problem with an architecture which requires this is that not every developer is going to do it.
Hopefully at least Frostbite/Unreal/Unity will all be optimized for it, so games a year or two from now might start shipping with good support.

I found a guide with details on base clock overclocking for the Crosshair VI, which states that the PCIe link speed will operate at Gen 3 speeds for 85–104.85 MHz, Gen 2 speeds from 105–144.8 MHz, and Gen 1 speeds at 145+ MHz.
So the fastest you can run the memory at while still keeping PCIe Gen 3 speeds is ~3050MT/s right now, since the 32x multiplier is still unstable.
 

ezodagrom

Member
Some were trying to imply that there was no way around the CPU Complex design being detrimental to performance. 400 hours of work to optimize for Ryzen is all it took, apparently. That's like a team of 5 people working over a few weeks. And bang, you get a 20-30% increase in fps, matching Intel's HEDT and with more performance to come. Of course, if devs don't bother with optimizations you get lower performance in some games, but why would they not, if the optimizations are fairly trivial?
There are publishers that have very poor post-release support. Not gonna be getting Ryzen patches from those.
 

spwolf

Member
B-but the CCX design... :p

That's massive. Anyone with any sense should have known that the R7s had way more to give when in some cases the CPU load was 50% during some game benches. Once you whack in 3200MHz+ RAM you'd get performance indistinguishable from a 7700K. Who'd have thought?

Sounds very good for people buying Ryzen.
 

Caayn

Member
It's almost like this is exactly what people have been saying; that the problem with the CCX design is that many applications/games will need to optimize specifically for Ryzen CPUs rather than just making an application/game that is well multi-threaded.
The problem with an architecture which requires this is that not every developer is going to do it.
Hopefully at least Frostbite/Unreal/Unity will all be optimized for it, so games a year or two from now might start shipping with good support.
It's only natural, to be honest, and I wouldn't consider it a problem with the CCX design. Intel has benefited from sticking with a similar design for a long time, which means that compilers and engines don't need to be readjusted to take advantage of Intel's design with each new Intel CPU release.

I think that AMD made a smart decision to start from the ground up, even if that meant losing previously done software optimizations.
 

spwolf

Member
It's only natural, to be honest, and I wouldn't consider it a problem with the CCX design. Intel has benefited from sticking with a similar design for a long time, which means that compilers and engines don't need to be readjusted to take advantage of Intel's design with each new Intel CPU release.

I think that AMD made a smart decision to start from the ground up, even if that meant losing previously done software optimizations.

Funny thing about compilers: usually Intel's is the fastest one, and it has code that gimps AMD by not even executing the optimized paths on it... but that can be patched out. I wonder how many people use ICC vs GCC vs Clang.
 

masterkajo

Member
Having read some recent news about Ryzen performing much better with high-speed memory (frequencies beyond 3000MHz), I am now really firm on getting an R5 1600 (if it performs as expected). A Corsair Vengeance 2x8GB 3200 RAM kit (€140) + R5 1600 (€220) + B350 motherboard (€100) gives a solid foundation for the future for the low price of ~€450.
 
It's almost like this is exactly what people have been saying; that the problem with the CCX design is that many applications/games will need to optimize specifically for Ryzen CPUs rather than just making an application/game that is well multi-threaded.
The problem with an architecture which requires this is that not every developer is going to do it.
Hopefully at least Frostbite/Unreal/Unity will all be optimized for it, so games a year or two from now might start shipping with good support.

I found a guide with details on base clock overclocking for the Crosshair VI, which states that the PCIe link speed will operate at Gen 3 speeds for 85–104.85 MHz, Gen 2 speeds from 105–144.8 MHz, and Gen 1 speeds at 145+ MHz.
So the fastest you can run the memory at while still keeping PCIe Gen 3 speeds is ~3050MT/s right now, since the 32x multiplier is still unstable.

Well, I know Oxide have this uncomfortably close relationship with AMD these days, but they are a tiny dev house, and if they can do it so quickly, the larger devs should have no problem, in terms of manpower at least.

Games from the biggest devs like DICE and id are all that these review sites seem to bench anyway, aside from a small number of exceptions, so it's not going to be critical if Tides of Numenera isn't optimized for Ryzen, because:

1. These types of games are typically undemanding, so they'll run in the 100s of fps regardless.

2. Unfortunately these smaller games never get benched, so many won't know how much worse they run comparatively for being unoptimized for the Zeppelin arch.

So all AMD have to do - and this is exactly what they're doing - is reach out to the large players in the PC dev space and they'll go a long way to solving this negative image.

You know, I know you're probably getting a Ryzen CPU, so I'd imagine you'll be punching the air in jubilation at this news, cos 20-30% gains are pretty joyous.
 

Datschge

Member
BTW is isolating to one CCX the ONLY solution around this Ryzen issue?
No, disabling the Windows scheduler's thread migration by using CPU affinity is the only solution (unless Microsoft ever gets around to making the scheduler saner). Inter-CCX latency is not that bad unless you are forced to repeatedly migrate threads and rebuild caches due to a forced thread migration "just because".
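For anyone wanting to try that, here's a minimal Win32 sketch of the affinity approach: pin the process to one CCX so the scheduler physically can't bounce threads across the CCX boundary. The 0xFF mask is an assumption that the first CCX's four cores plus their SMT siblings enumerate as logical processors 0-7, which is the usual layout on an R7 but worth verifying on your own system.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Logical CPUs 0-7: assumed to be the first CCX's four cores
       plus their SMT siblings on a Ryzen 7 (verify the enumeration
       on your own system before relying on this). */
    DWORD_PTR mask = 0xFF;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }

    printf("Pinned to logical CPUs 0-7 (one CCX); the scheduler can no"
           " longer migrate our threads to the other CCX.\n");
    /* ...run or spawn the latency-sensitive workload here... */
    return 0;
}
```

You can get the same effect without code via `start /affinity FF game.exe` from a command prompt, or per-thread with SetThreadAffinityMask.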
 

Steel

Banned
Some were trying to imply that there was no way around the CPU Complex design being detrimental to performance. 400 hours of work to optimize for Ryzen is all it took, apparently. That's like a team of 5 people working over a few weeks. And bang, you get a 20-30% increase in fps, matching Intel's HEDT and with more performance to come. Of course, if devs don't bother with optimizations you get lower performance in some games, but why would they not, if the optimizations are fairly trivial?

The people saying that the CCX split would always be a problem were also saying that there were probably ways to mitigate the issue. That doesn't really diminish that the R7s are good and a great leap for AMD.

So do we have a thread for recommended RAM/mobo for the 1600X? Or should we ask here?

We don't, but you can ask here. I'd look into what the manufacturer of the board you've chosen suggests, but I've been getting by with G.Skill Ripjaws V @ 3200 MHz on my MSI board. I could only overclock it to 3200 MHz through Ryzen Master; any changes in the BIOS don't take. Well, I had to adjust the voltage in the BIOS, because I couldn't in Ryzen Master. It's a bit messy.
 

tuxfool

Banned
Actually, the issue in that Ashes of the Singularity benchmark is that they were bypassing the cache by using instructions that are specifically designed for data that is not going to be reused again.

https://twitter.com/FioraAeterna/status/847472309010964481

For an explanation of how those instructions work:

http://stackoverflow.com/questions/37070/what-is-the-meaning-of-non-temporal-memory-accesses-in-x86

Used properly it could be faster, but like a lot of things it is never universally applicable.
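To make the mechanism concrete, here's a small sketch (my own illustration, not Oxide's actual code) of a copy loop built on non-temporal stores; the _mm_stream_si128 intrinsic compiles to the MOVNTDQ instruction linked above, and both buffers are assumed 16-byte aligned.

```c
#include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_sfence, _mm_malloc */
#include <string.h>
#include <stdio.h>

/* Copy 'bytes' (a multiple of 16) using non-temporal stores. The
   stores bypass the cache hierarchy, which helps when 'dst' won't
   be read again soon, and hurts when it will, since the data must
   then come back from DRAM instead of L1/L2/L3. */
static void copy_nontemporal(void *dst, const void *src, size_t bytes)
{
    __m128i       *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;

    for (size_t i = 0; i < bytes / 16; ++i) {
        __m128i v = _mm_load_si128(s + i); /* normal cached load      */
        _mm_stream_si128(d + i, v);        /* MOVNTDQ: uncached store */
    }
    _mm_sfence(); /* make the streamed stores globally visible */
}

int main(void)
{
    size_t n = 1 << 20; /* 1 MiB */
    void *src = _mm_malloc(n, 16), *dst = _mm_malloc(n, 16);
    memset(src, 0xAB, n);
    copy_nontemporal(dst, src, n);
    printf("first byte: 0x%02X\n", *(unsigned char *)dst);
    _mm_free(src);
    _mm_free(dst);
    return 0;
}
```

Whether this wins or loses depends entirely on whether the destination gets read back while it would otherwise still have been in cache, which fits the "never universally applicable" point above.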
 

Datschge

Member
Actually, the issue in that Ashes of the Singularity benchmark is that they were bypassing the cache by using instructions that are specifically designed for data that is not going to be reused again.
If that's the only change it's interesting that Intel's 6900K barely shows an improvement in the above benchmark. Is that an indication that Intel still caches the data despite the instruction not to? Or is this about cache writes, not reads?
Edit: Answering myself: it's about storing data to memory while bypassing the cache hierarchy. E.g. MOVNTDQ: http://www.felixcloutier.com/x86/MOVNTDQ.html Saw several mentions that Intel's compiler, without intervention, may or may not use it depending on what it thinks gives better performance.
 

Paragon

Member
AMD Ryzen Community Update 2: https://community.amd.com/community/gaming/blog/2017/03/30/amd-ryzen-community-update-2

We will soon be distributing AGESA point release 1.0.0.4 to our motherboard partners. We expect BIOSes based on this AGESA to start hitting the public in early April, though specific dates will depend on the schedules and QA practices of your motherboard vendor.

BIOSes based on this new code will have four important improvements for you:
  1. We have reduced DRAM latency by approximately 6ns. This can result in higher performance for latency-sensitive applications.
  2. We resolved a condition where an unusual FMA3 code sequence could cause a system hang.
  3. We resolved the “overclock sleep bug” where an incorrect CPU frequency could be reported after resuming from S3 sleep.
  4. AMD Ryzen™ Master no longer requires the High-Precision Event Timer (HPET).
We will continue to update you on future AGESA releases when they’re complete, and we’re already working hard to bring you a May release that focuses on overclocked DDR4 memory.
 

Kambing

Member
I am going to be downloading CEMU and seeing how Ryzen fares there -- will make for a fun evening. At the very least, I hope I will get better performance than my 2500K. Hope to share more soon!
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
This doesn't make any sense, you are literally the person that should be buying into HEDT. Why build two machines just to have AMD when you can build a single 6900K machine which breathes fire in both games and workstation applications, especially if you're going to overclock it until it cries? What do you specifically need ECC for?

If you really do need ECC, there's also this:
https://www.asus.com/Motherboards/X99E_WS/specifications/

X99 with ECC support. Put a 6900K in it. Have fun!
That's not how it works, unfortunately. You need a Xeon for that ECC to work. You can thank Intel for that.

Funny thing about compilers: usually Intel's is the fastest one, and it has code that gimps AMD by not even executing the optimized paths on it... but that can be patched out. I wonder how many people use ICC vs GCC vs Clang.
Using all three here.
 

dr_rus

Member
B-but the CCX design... :p

That's massive. Anyone with any sense should have known that the R7's had way more to give when in some cases the CPU load was 50% during some game benches. Once you wack in 3200Mhz+ ram you'd get performance indistinguishable from a 7700K. Who'd have thought?

B-but this is exactly what you should expect from a CCX design - a lot of performance can be gained via software optimization on such a NUMA system. The AotS example is perfect in illustrating this.

More AotS testing from PCGH btw: Ashes of the Singularity: Was bringt der AMD-Ryzen-Patch? Spiel vs. integrierter Benchmark (What does the AMD Ryzen patch deliver? Game vs. integrated benchmark)

[Image: ybrc.png - PCGH AotS benchmark results]


[Image: zbrc.png - PCGH AotS benchmark results]
 
B-but this is exactly what you should expect from a CCX design - a lot of performance can be gained via software optimization on such a NUMA system. The AotS example is perfect in illustrating this.

You must think people have a short memory or something. You've completely changed your tune in light of AotS being optimised. This is you on page 36 of this thread:

Sigh. It's not about how you spread the threads, it's about THE FACT that threads of some nature will ALWAYS need to access some data in a "far" L3 of the other CCX. This is a hardware problem, it can't be fixed with OS or anything else, only worked around to some degree. Also - where can we see the results of Linux running the same software on Ryzen better than Windows? Even AMD has said already that there is no issues in how Windows 10 schedule work on Ryzen.

These optimisations prove that the CPU Complex design isn't a 'hardware problem' like you and others tried to dramatically claim it was. It's not a problem at all. If a small amount of work from a little dev team like Oxide brings 20-30% performance increases initially, with more performance to come, then this hasn't just been 'worked around', it's completely vanished as an issue as the fps are indistinguishable from the Intel HEDTs.

The problem with your rhetoric in this thread is you were trying to whip up negativity when I think you knew full well that lack of software optimisation (because Zen is a brand new architecture on a brand new platform - give it a bloody chance longer than 1 week) does not equal a 'hardware problem' which causes a small loss of FPS in games (when tested with a Titan X-level card and at 1080p resolution, let's not forget that either).

Now you turn up again and try to imply you knew the CCX design 'flaw' would be circumvented all along. Ha ha! That's not the sentiment I got from your previous posts, although to be fair, you weren't the worst offender when it came to all this.
 
Dota 2 was updated (likely CPU affinity stuff):

- Fixed the display of particles in the portrait window.
- Fixed Shadow Fiend's Demon Eater (Arcana) steaming while in the river.
- Fixed Juggernaut's Bladeform Legacy - Origins style hero icons for pre-game and the courier button.
- Improved threading configuration for AMD Ryzen processors.
- Workshop: Increased head slot minimum budget for several heroes.

http://store.steampowered.com/news/28296/

Players report gains of 20-25%.


Dota 2 Pre-patch: https://arstechnica.com/gadgets/2017/03/amd-ryzen-review/3/



~23% increase — Dota 2 pre-patch -vs- post-patch: https://www.chiphell.com/thread-1717769-1-1.html




AMD —— AMD Ryzen™ Community Update #2

Boosting minimum framerates in DOTA™ 2

Many gamers know that an intense battle in DOTA 2 can be surprisingly demanding, even on powerful hardware. But DOTA has an interesting twist: competitive gamers often tell us that the minimum framerate is what matters more than anything in life or death situations. Keeping that minimum framerate high and steady keeps the game smooth, minimizes input latency, and allows players to better stay abreast of every little change in the battle.


As part of our ongoing 1080p optimization efforts for the AMD Ryzen™ processor, we identified some fast changes that could be made within the code of DOTA to increase minimum framerates. In fact, those changes are already live on Steam as of the March 20 update!


We still wanted to show you the results, so we did a little A:B test with a high-intensity scene developed with the assistance of our friends in the Evil Geniuses eSports team. The results? +15% greater minimum framerates on the AMD Ryzen™ 7 1800X processor, which lowers input latency by around 1.7ms.


Not bad for some quick wrenching under the hood, and we’re continuing to explore additional optimization opportunities in this title.


[Image: pastedimage_48cbl9.png - DOTA 2 minimum framerate chart]


System configuration: AMD Ryzen™ 7 1800X Processor, 2x8GB DDR4-2933 (15-17-17-35), GeForce GTX 1080 (378.92 driver), Gigabyte GA-AX370-Gaming5, Windows® 10 x64 build 1607, 1920x1080 resolution

tournament-optimized quality settings - https://community.amd.com/servlet/JiveServlet/download/1455-69711/DOTA2FPSTesting.pdf
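As a rough sanity check on pairing those two numbers (the baseline is my assumption; AMD doesn't state it): a 15% minimum-framerate gain costing 1.7 ms of frame time implies a baseline minimum of around 77 fps, since 1000/77 ≈ 13.0 ms per frame versus 1000/(77 × 1.15) ≈ 11.3 ms after the patch.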
 

Locuza

Member
What's very interesting from community update No. 2:
Let’s talk BIOS updates

Finally, we wanted to share with you our most recent work on the AMD Generic Encapsulated Software Architecture for AMD Ryzen™ processors. We call it the AGESA™ for short.

As a brief primer, the AGESA is responsible for initializing AMD x86-64 processors during boot time, acting as something of a “nucleus” for the BIOS updates you receive for your motherboard. Motherboard vendors take the baseline capabilities of our AGESA releases and build on that infrastructure to create the files you download and flash.

We will soon be distributing AGESA point release 1.0.0.4 to our motherboard partners. We expect BIOSes based on this AGESA to start hitting the public in early April, though specific dates will depend on the schedules and QA practices of your motherboard vendor.

BIOSes based on this new code will have four important improvements for you:

- We have reduced DRAM latency by approximately 6ns. This can result in higher performance for latency-sensitive applications.
- We resolved a condition where an unusual FMA3 code sequence could cause a system hang.
- We resolved the “overclock sleep bug” where an incorrect CPU frequency could be reported after resuming from S3 sleep.
- AMD Ryzen™ Master no longer requires the High-Precision Event Timer (HPET).


We will continue to update you on future AGESA releases when they’re complete, and we’re already working hard to bring you a May release that focuses on overclocked DDR4 memory.
 

Durante

Member
It's almost like this is exactly what people have been saying; that the problem with the CCX design is that many applications/games will need to optimize specifically for Ryzen CPUs rather than just making an application/game that is well multi-threaded.
The problem with an architecture which requires this is that not every developer is going to do it.
Hopefully at least Frostbite/Unreal/Unity will all be optimized for it, so games a year or two from now might start shipping with good support.
Exactly.

Funny thing about compilers: usually Intel's is the fastest one, and it has code that gimps AMD by not even executing the optimized paths on it... but that can be patched out. I wonder how many people use ICC vs GCC vs Clang.
99% of all Windows games aren't compiled with any of those three, but rather with MSVC.

Anyway, a standard compiler doesn't do any thread scheduling, so it's unrelated to that issue.
 
·feist· said:
Dota 2 Pre-patch: https://arstechnica.com/gadgets/2017/03/amd-ryzen-review/3/




~23% increase — Dota 2 pre-patch -vs- post-patch: https://www.chiphell.com/thread-1717769-1-1.html

As part of our ongoing 1080p optimization efforts for the AMD Ryzen™ processor, we identified some fast changes that could be made within the code of DOTA to increase minimum framerates. In fact, those changes are already live on Steam as of the March 20 update!

Brilliant that some quick code changes result in such large performance increases.

It's almost like this is exactly what people have been saying; that the problem with the CCX design is that many applications/games will need to optimize specifically for Ryzen CPUs rather than just making an application/game that is well multi-threaded.
The problem with an architecture which requires this is that not every developer is going to do it.
Hopefully at least Frostbite/Unreal/Unity will all be optimized for it, so games a year or two from now might start shipping with good support.

Exactly.


Exactly what? That it will take 1-2 years for games to start shipping with optimised support for Ryzen? That won't be the case. It looks like the large devs will have these optimisations out in several weeks for existing games. As for games yet to be released, optimisations will be there day 1, not 1-2 years from now.

If you mean optimisations to make specific games run faster on Ryzen where it has a core/thread advantage, then that will take a long time.
 

dr_rus

Member
You must think people have a short memory or something. You've completely changed your tune in light of AotS being optimised. This is you on page 36 of this thread:
You should learn to read before saying what I've changed and what I didn't.

These optimisations prove that the CPU Complex design isn't a 'hardware problem' like you and others tried to dramatically claim it was. It's not a problem at all. If a small amount of work from a little dev team like Oxide brings 20-30% performance increases initially, with more performance to come, then this hasn't just been 'worked around', it's completely vanished as an issue as the fps are indistinguishable from the Intel HEDTs.
These optimizations prove the exact opposite of what you're saying - that the CCX is a hardware problem which can be worked around to some degree by doing s/w optimizations specifically for the h/w in question. Which is what I've been saying all along.

The problem with your rhetoric in this thread
There is no "problem" with my "rhetoric" in this thread. The problem is completely on your side.
 

Parsnip

Member
Any news/benchmarks from the R5?

Beyond the simulated/speculative benches, I haven't seen any.

I'm waiting for those as well.
If the simulated benches are anything to go by, R5s have the potential to be quite interesting for their price segment. And by the time those are out, some of the growing pains with BIOSes etc. should be ironed out.
 
You should learn to read before saying what I've changed and what I didn't.


These optimizations prove the exact opposite of what you're saying - that the CCX is a hardware problem which can be worked around to some degree by doing s/w optimizations specifically for the h/w in question. Which is what I've been saying all the time.

The CCX is integral to Ryzen's design, so you're basically saying Ryzen itself is one big 'hardware problem', even though some trivial and initial software optimisations after a few weeks completely negate any performance losses compared to the 6900K. In fact the 1800X is slightly faster than Intel's 8-core HEDT according to PCGH, so according to you, we must also assume the 6900K has a hardware problem as well.
 

shark sandwich

tenuously links anime, pedophile and incels
I returned my Asus Prime B350M-A. Newest BIOS still didn't fix any of my problems:
- weird CPU voltage readings, sometimes over 1.5 V even though it was set to 1.350 V in the BIOS
- wrong CPU temp reported, which causes the fans to go berserk unless set to silent mode (AMD clarified that the 1700X and 1800X report temps 20°C higher than actual and that some motherboards haven't yet corrected for this in the BIOS)
- doesn't correctly report my memory's XMP settings and defaults to 2133 MHz, so I have to manually set speed/timings (and yes, I'm using RAM from their QVL)
- "EZ Update" is supposed to find all the latest drivers/BIOS but it fails to recognize the motherboard
- it also includes a Windows-based BIOS flashing utility. DON'T USE THIS. The first 1/3 took about 5 minutes, then it took FOUR HOURS to complete. And it froze when it completed. Luckily it did update successfully.

I returned that POS and got the MSI B350 Mortar instead. I am relieved to say that I have no problems with this board:
- correct temperatures reported
- no weird voltage readings
- BIOS lets you use XMP settings. These were correctly detected and applied using the 1.1 BIOS, despite my RAM not being on their QVL
- bonus: has a whopping 2 case fan headers vs 1 on the Prime

I'd definitely recommend the Mortar for anybody looking for a mATX board. Easily worth the extra $5-$10 over the Prime.
 

Sinistral

Member
I feel the same way about the Asus Prime B350M-A, shark sandwich. The lack of beta BIOSes and the molasses release schedule for this board are disappointing to see from Asus. I'm willing to wait for the May update to see if the RAM situation actually improves, though. Which XMP settings, and with what RAM and size, were you able to use on the Mortar? I've been eyeing the ASRock AB350M Pro, just because of the placement of the M.2 slot.

It's not a problem, it's a compromise. Performance is not bad with it, it just could be better.

Exactly how I see it. The CCX architecture is a compromise. People calling it a problem are being hyperbolic. Any new architecture or hardware will require optimizing; it's how things were always done before Intel's dominance in the CPU sector, and it's still done today on GPUs.

The CCX design is what will allow AMD, the much smaller company, to be more agile in deploying the architecture to various markets, at competitive performance metrics and market-shattering prices.

It's meant to scale easily from 1 CCX all the way up to 8 CCXs so far, while having the possibility of being deployed as a capable APU. Compare that to the compromises in how Intel has to go about releasing various core configurations.

Compromise. It's the design decision AMD is going with, and it will take, I think, until Zen 2 to really see how the markets react and how ecosystems evolve before passing solid judgement.
 
Not sure if anyone has seen this, but it's pretty interesting. It's about the relationship between Ryzen performance and the video card drivers on Nvidia vs AMD cards. This guy ran some benchmarks and tests and believes that the lack of performance on Ryzen is due to an Nvidia driver bottleneck.

https://www.youtube.com/watch?v=0tfTZjugDeg


AMD GPU DX11
R7 1800X = 50FPS
i7 7700K = 55FPS

10% difference.

NVIDIA GPU DX11
R7 1800X = 60FPS
i7 7700K = 80FPS

33% difference.

I think all review sites benched with an Nvidia GPU naturally. I would like to see Fury X benches with Ryzen and Intel CPUs.
 

Paragon

Member
Hardware Canucks have a "deep dive" article on ECC support: http://www.hardwarecanucks.com/foru...ws/75030-ecc-memory-amds-ryzen-deep-dive.html
The short version is that ECC appears to be working correctly and correcting errors.
However only ASRock boards seem to have any UEFI options relating to ECC right now, and operating systems are only logging memory errors. They do not halt if a multi-bit error is detected.

So it seems like it's better than not having ECC memory at all, but hardly ideal.
I wonder if ASRock or ASUS will release fully validated 'workstation' boards at some point, and hopefully Windows will be updated with better support for ECC on the AM4 platform.
No confirmation on what speeds will be achievable either; only speculation that you might not get full speed on the 2666MT/s kits right now.

- weird CPU voltage readings sometimes over 1.5v even though it was set to 1.350 in BIOS
That sounds like it could be load line calibration if you had it turned all the way up.
LLC is supposed to compensate for VDroop, where the voltage drops if the CPU is under heavy load. Set it too high, and it will push the voltage above your target instead of just compensating for the drop.

Not sure if anyone has seen this, but it's pretty interesting. It's about the relationship between Ryzen performance and the video card drivers on Nvidia vs AMD cards. This guy ran some benchmarks and tests and believes that the lack of performance on Ryzen is due to an Nvidia driver bottleneck.

https://www.youtube.com/watch?v=0tfTZjugDeg

There seem to be far too many unaccounted-for variables here, and not enough testing to draw any real conclusions.

Do AMD's GPU drivers perform better on Ryzen than NVIDIA's, or is it that multi-GPU scales better in RotTR DX12?
Is this same level of performance scaling seen in other games?
What are the results when your tests are not GPU-bottlenecked?

He also says that a 30-35% difference in performance is unreasonable.
If we look at IPC, Intel was about 7% ahead if I remember correctly. Combine that with 25% higher clockspeeds and we're at roughly 34% faster per-core performance (1.07 × 1.25 ≈ 1.34).
 

shark sandwich

tenuously links anime, pedophile and incels
I feel the same way about the Asus Prime B350M-A shark sandwich. The lack of beta bios, and molasses release schedule for this board is disappointing to see from Asus. I'm willing to wait for the May update to see if the ram situation actually improves though. Which XMP settings and with what ram and size were you able to use on the Motar? I've been eyeing the Asrock AB350M Pro, just cause the placement of the M.2 drive.
I'm using a Corsair 2400 MHz 14-16-16-31 16GB dual-channel kit (sorry, don't remember the model #).

The latest Prime BIOS finally recognizes the correct XMP settings, but there is no option to use them. Your only choices are Auto (which sets it to 2133) or setting them manually.

The Mortar with the 1.0 BIOS recognized the XMP settings but did not automatically use them when "A-XMP" was enabled (I could still set them manually). With the 1.1 BIOS I was able to turn on the "A-XMP" setting and leave everything else set to Auto, and it set the speed/timings correctly.
 

V_Arnold

Member
He also says that a 30-35% difference in performance is unreasonable.
If we look at IPC, Intel was about 7% ahead if I remember correctly. Combine that with 25% higher clockspeeds and we're at roughly 34% faster per-core performance (1.07 × 1.25 ≈ 1.34).

In what world does core performance scale linearly with clockspeeds?
Can I just OC my FX-6300 from 3.5GHz to 4.5GHz and get a hefty ~29% increase? Sign me up on that plane!
 

dr_rus

Member
The CCX is integral to Ryzen's design, so you're basically saying Ryzen itself is one big 'hardware problem', even though some trivial and initial software optimisations after a few weeks completely negate any performance losses compared to the 6900K. In fact the 1800X is slightly faster than Intel's 8-core HEDT according to PCGH, so according to you, we must also assume the 6900K has a hardware problem as well.

The CCX is a design decision AMD made for Ryzen which results in the problems you're seeing in games and other complex software that have a small number of heavyweight threads. You can say it this way if you prefer - it doesn't change the fact that this is a problem of Ryzen's h/w design.

You're playing with words and this is completely pointless as no matter how you call some thing it's still this thing in the end.

The fact that the 6900K is running worse with the "Ryzen patch" should tell you pretty much all you need to know about AotS as a fair benchmark of anything. This, however, has no relation at all to the fact that Ryzen is having problems in games due to how it is built in the first place.

I also laughed pretty loud reading about all major companies fixing all existing games for Ryzen in two weeks. You're delusional.
 

Locuza

Member
PCGH found the reason why Ryzen and Broadwell-E have such terrible minimum and average FPS: the game simply doesn't compute a lot of the particle effects if you have only four physical cores; SMT doesn't matter.
This only happens in the game itself; the integrated benchmark doesn't scale the graphics based on physical cores.
If you disable 4 cores on Ryzen the FPS automatically increase because the game scales down the graphics.
http://www.pcgameshardware.de/commoncfm/comparison/clickSwitch.cfm?id=138531
[Image: ryzenwtawv.jpg - PCGH core-scaling benchmark]

http://www.pcgameshardware.de/Ryzen-7-1800X-CPU-265804/Specials/AMD-AotS-Patch-Test-Benchmark-1224503/
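For what it's worth, here's a sketch of how an engine might gate simulation work on physical (not logical) core count, which would produce exactly the behaviour PCGH observed; this is my own illustration, not PCGH's or Oxide's actual code.

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Count physical cores (SMT siblings excluded), the way an engine
   might before deciding how much optional simulation to run. */
static int count_physical_cores(void)
{
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len); /* query buffer size */
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len)) {
        free(info);
        return -1;
    }
    int cores = 0;
    for (DWORD i = 0; i < len / sizeof(*info); ++i)
        if (info[i].Relationship == RelationProcessorCore)
            ++cores;
    free(info);
    return cores;
}

int main(void)
{
    int cores = count_physical_cores();
    /* Hypothetical gate matching PCGH's observation: the extra
       particle simulation only runs above four physical cores,
       regardless of SMT. */
    printf("physical cores: %d, full particle simulation: %s\n",
           cores, (cores > 4) ? "on" : "off");
    return 0;
}
```

A gate like that would explain why an 8-core chip posts lower FPS than a 4-core one in the game itself: it's simply doing more work, and disabling cores silently scales the workload back down.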
 
Jim/AdoredTV just found out something really interesting: the CPU bottleneck in Tomb Raider DX12 is much less pronounced with Ryzen as soon as you switch from a highly overclocked 1070 to a CrossFire 480 setup (at stock speeds, which should put both setups roughly on par GPU-power-wise):

[Image: ryzenghs9x.png - Tomb Raider DX12 benchmark chart]


Watch the video for context:

https://www.youtube.com/watch?v=0tfTZjugDeg
 
He also says that a 30-35% difference in performance is unreasonable.
If we look at IPC, Intel was about 7% ahead if I remember correctly. Combine that with 25% higher clockspeeds and we're at roughly 34% faster per-core performance (1.07 × 1.25 ≈ 1.34).

If this is your attempt at a joke then right on, it's about time you loosened up and that had me chuckling.

If not, in what parallel universe does CPU clockspeed + IPC correlate directly to FPS in games? This takes the biscuit in this thread for the most 'out there' reasoning.
 

Datschge

Member
Jim/AdoredTV just found out something really interesting: the CPU bottleneck in Tomb Raider DX12 is much less pronounced with Ryzen as soon as you switch from a highly overclocked 1070 to a CrossFire 480 setup (at stock speeds, which should put both setups roughly on par GPU-power-wise)
Somebody on another forum called it an Nvidia DX12 API bottleneck. It's probably Nvidia's DX12 driver needing adaptation/optimization for Ryzen.

The CCX is a design decision AMD made for Ryzen which result in problems you're seeing in games and other complex software which have a small amount of heavy weight threads. You can say it this way if you prefer - doesn't change the fact that this is a problem of Ryzen's h/w design.
It only becomes a "hardware" problem if the software scheduler is stupid enough to randomly spread and move that small number of heavyweight threads across the two CCXes, and the developer of said scheduler pretends everything "works" as intended.
 