
Intel X299 platform first wave reviews - i7 7740X/7800X/7820X and i9 7900X

Jebusman

Banned
Just btw, that's not true. A PSU rated at 550W means it can (well, unless it's shitty) deliver 550W to your components. Depending on its efficiency, it can draw way more from the wall and still work perfectly fine within its official limits.

To follow up (because that previous post was bugging me), efficiency means how efficient it is turning power from the wall into power your machine can use.

A PSU running at 80% efficiency means that 80% of the power it's drawing from the wall is being converted for actual use; the other 20% is wasted as heat.

So for example, if you had a 500W power supply, that ran at 80% efficiency while at 100% load, it would actually draw around 625W from the wall. (625W x 0.8 = 500W). The extra 125W would be wasted as heat.

This is why PSUs with higher 80 Plus certs can run in semi-fanless or completely fanless modes at low loads, as they get the waste heat down to levels that can be passively dissipated.

A 500W Titanium PSU running at 50% capacity is rated for around 94% efficiency, losing only 6% of its wall draw as heat. 250W of output means roughly 266W from the wall, or about 16W of waste heat. That's little enough for the heatsinks inside to handle without the fan kicking in.

So if someone is pulling 550W from the wall running their system as hard as they can, they're not actually using 550W in their machine. Even at the lowest acceptable efficiency for an 80 Plus cert (80%) you'd still only be using about 440W (and generating 110W of heat).

This has been your PSU PSA.
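
If you want to play with the numbers yourself, it's two lines of arithmetic. A quick Python sketch (the 94% Titanium figure is from the 80 Plus spec; the rest are the example numbers above):

```python
def psu_draw(output_watts, efficiency):
    """Wall draw and heat waste for a PSU delivering output_watts
    at the given efficiency (0.0-1.0)."""
    wall = output_watts / efficiency   # power pulled from the outlet
    heat = wall - output_watts         # the difference is lost as heat
    return wall, heat

# 500W delivered at 80% efficiency -> 625W from the wall, 125W of heat
print(psu_draw(500, 0.80))   # (625.0, 125.0)

# 500W Titanium unit at 50% load: 250W delivered at ~94% -> ~16W of heat
print(psu_draw(250, 0.94))   # (~266W, ~16W)
```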

You don't want your PSU to run at half capacity either.

[Image: PSU efficiency vs. load comparison chart]


http://www.anandtech.com/show/2624/3

50-70% draw is typically where you see maximum efficiency in most PSUs nowadays.
 

Paragon

Member
If you're looking to build a PC purely for gaming, these CPUs shouldn't even be on your radar. Look to the 7700k, Ryzen 1700, 1600 or, on the cheap end, a 1500X. Those will get you the best gaming performance.
That's true.
I'm not sure what happened, but it seems like a lot of people suddenly think that you need HEDT for gaming.
Maybe it's because Ryzen made 6-core and 8-core CPUs mainstream, and now people think they also need 6 or 8 core CPUs if they want to buy Intel instead?

I know someone that's been talking about building their first gaming PC for maybe 6 months now - looking to spend less than $1000 and get the best performance per dollar that they can.
Last week they started asking me all these questions about what X299 hardware they should be looking at, without realizing that a 6-core system starts at $650 just for the CPU & Motherboard, when they had previously been questioning whether it was worth an extra $100 to get an i7 instead of an i5, or a few extra bucks to get a Z270 board instead of an H270 board. (that half of the conversation was before Ryzen, when Intel was the only viable option for gaming)

It's more me thinking, do I build a new PC this year and maybe get a PS4 to play all the exclusives I've missed sometime next year, or PS4 this year and PC next year? I have a PS3, and there are a bunch of franchises that I've been wanting to get on.
edit: specifically, I want to upgrade *now* but the PS4 will come down / get bundles soon, and better mobos are certainly coming soon.
Well, I still don't see how the two are linked, but it's not going to take a year to sort this out.
If you wait a year, you'll be looking at X399 instead.
And if you're just building a gaming system, you shouldn't be looking at X299/HEDT unless you have money to burn.
As Steel said, you should probably be looking at an i7-7700K or an R5-1600X if it's purely for gaming.

EDIT:
I want to get on the new gen. I've skipped LGA1151 so far, so buying it now seems like a waste.
Wait for Coffee Lake then.


Can we get another topic for power supply discussions?
The Ryzen thread was recently flooded with this too.

I don't really want to engage with this topic, but there are so many factors being ignored.
  • Power supply design has come a long way in the last decade. (your source is from 2008)
  • Higher wattage power supplies focus more on efficiency than lower wattage supplies, generally making their idle power draw similar or equal to a lower wattage supply's. The posts in the Ryzen topic read like an over-specced power supply is going to draw an extra 100W at idle, when it's likely under 10W.
  • They run cooler, and are often so efficient that they can remain fanless under load, while lower wattage supplies run the fan at high speeds for the same load. (and that fan may increase power draw by another 5W or so)
  • Longevity is increased when running a power supply at lower loads vs near full load.
  • Efficiency peaks at 50% load in the majority of power supply designs today, and you want peak efficiency at maximum load, where an extra percentage point or two can mean double-digit differences in power draw, rather than at idle, where it means fractions of a watt. To be fair, the losses are usually minimal at 60-70% load now, and maybe idle matters if your PC runs 24/7 and sits idle most of the time without ever being shut down or going to sleep, but absolute peak efficiency is generally at 50% load (see the sizing sketch below).
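
To put the sizing logic in code (a rough sketch with illustrative wattages; real efficiency curves vary by model):

```python
def pick_psu(max_system_watts, sizes=(450, 550, 650, 750, 850, 1000)):
    """Smallest standard PSU rating whose 50-70% load band contains
    the system's maximum draw, so full load sits near peak efficiency."""
    for size in sizes:
        if 0.5 * size <= max_system_watts <= 0.7 * size:
            return size
    # nothing puts max draw in the band: take the smallest that isn't overloaded
    return next((s for s in sizes if max_system_watts <= 0.7 * s), sizes[-1])

print(pick_psu(400))  # 650 -> 400W is ~62% load, near the efficiency sweet spot
```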
 
In summary, Skylake X is decent, but consider the Ryzen 7s, since they're substantially cheaper and fairly close in non-gaming tasks. Kaby Lake X is completely pointless; stick with regular Kaby Lake.
 

Steel

Banned
I want to get on the new gen. I've skipped LGA1151 so far, so buying it now seems like a waste.

It wouldn't really be, considering how much cheaper it has gotten. If you have to buy the new and shiny, you could go with Ryzen. Of course, whether or not you really want to upgrade depends on what you're upgrading from.

That's true.
I'm not sure what happened, but it seems like a lot of people suddenly think that you need HEDT for gaming.
Maybe it's because Ryzen made 6-core and 8-core CPUs mainstream, and now people think they also need 6 or 8 core CPUs if they want to buy Intel instead?

I know someone that's been talking about building their first gaming PC for maybe 6 months now - looking to spend less than $1000 and get the best performance per dollar that they can.
Last week they started asking me all these questions about what X299 hardware they should be looking at, without realizing that a 6-core system starts at $650 just for the CPU & Motherboard, when they had previously been questioning whether it was worth an extra $100 to get an i7 instead of an i5, or a few extra bucks to get a Z270 board instead of an H270 board. (that half of the conversation was before Ryzen, when Intel was the only viable option for gaming)

I feel like people are just waiting for Intel to smother Ryzen in performance per dollar, when the X299 platform isn't even supposed to be used for the same type of computers as Ryzen (and even then, Ryzen competes with them in those).

In summary, Skylake X is decent, but consider the Ryzen 7s, since they're substantially cheaper and fairly close in non-gaming tasks. Kaby Lake X is completely pointless; stick with regular Kaby Lake.

I don't even understand why Kaby Lake X exists.
 

holdthephone

Neo Member
7800x: I'm sitting on 4.5 GHz @ 1.14v on air cooling (Noctua U12s). ~75-77 degrees during heavy usage.

In Cinebench R15 I'm looking at 1474 cb for multicore and 195 cb for single-core.
 

dr_rus

Member
7800x: I'm sitting on 4.5 GHz @ 1.14v on air cooling (Noctua U12s). ~75-77 degrees during heavy usage.

In Cinebench R15 I'm looking at 1474 cb for multicore and 195 cb for single-core.

75-77C would be a bit outside my personal comfort range for a CPU temperature, tbh.
 
https://www.youtube.com/watch?v=2-4D_j7pG1M

OC3D doesn't have the same issue as der8auer, so it could be down to PSU or board differences.
There's a reason I try to post as many varied sources as I can for reviews, or for points of interest such as this, which get presented as "fact" instead of mere opinion or speculation.



Silicon Lottery:

https://www.techpowerup.com/234744/intel-x299-platform-called-a-vrm-disaster-by-overclocker-der8auer
http://www.overclock.net/t/1631319/skylake-x-binning/150_30#post_26189935

Silicon Lottery said:
I am having trouble with some of these X299 motherboards. I've bought a wide variety for this launch, and none of them are really handling the load of an overclocked 7900X as well as I'd expect. VRM temps through the roof and boards throttling.



Ian ("8Pack")Parry:

https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z12ghpdblwiqjhsa004cfzoxjsuufb4jp14



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z13jxtkgpxjdz5heg23kwvib2lbavvus4



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z13jxrspikazfzetf23pdpgh0z3vcpuqs04.1498724343354617



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z13kcnopnpyrsh0cy22ofh4ajoedcpc3l



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z12vgdtzbymfwbkpz04cfdaw2yu1zn2joxs0k



http://forum.hwbot.org/showpost.php?p=490710&postcount=5
der8auer said:
ProRules said:
Hey fellas.
So we've all probably seen this video:
https://www.youtube.com/watch?v=OfLaknTneqw
As he tested it, the CPU hit extremely high power consumption of over 400W.
Now what I'm asking is: was it just because his CPU throttled and his cooler was inadequate? I mean, come on, is an H100 or any other AIO really that bad? If I'm pairing it with two 360mm rads just for the CPU and GPU, will it still throttle like that?
The CPU can run 5 GHz on the Apex with an H100. So if it can only run 4.5 or 4.6 on a different board, it's for sure not the chip or the cooler. :)
The CPU is delidded with liquid metal.



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z13hebb5rke4ideh204cin4aqziqfblqack



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z12qwlpjpn3wwn3hi04cfz1ojkusux0plno0k



https://www.youtube.com/watch?v=f7BqAjC4ZCc&lc=z12re5srxtn5hbstp23jsvzhunfhtjdwe04.1498946727301150



http://www.overclock.net/t/1633285/yt-the-x299-vrm-disaster-en/210_30#post_26201375
The Stilt said:
Considering the power consumption of an overclocked 7900X, the VRM issues shouldn't come as a surprise to anyone.

Hardware.fr recorded the following VRIN power draws on a 7900X during Prime95 (28.10):

4.0GHz = 158.0W
4.1GHz = 177.6W
4.3GHz = 212.4W
4.4GHz = 240.1W
4.5GHz = 265.2W
4.6GHz = 308.4W
4.7GHz = 362.8W

At 4.3GHz even the most efficient VRMs will be dissipating at least 32W of power, and at 4.7GHz over 54W.
There is no way you can dissipate that amount of heat with a standard sized VRM heatsink without serious direct airflow over it.
Beyond 300W you don't want to use a single EPS12V connector either.
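
For context, The Stilt's figures follow directly from VRM conversion efficiency. A rough sketch, assuming ~87% efficiency for a top-tier VRM (real boards vary):

```python
def vrm_heat(cpu_watts, efficiency=0.87):
    """Heat dissipated in the VRM while delivering cpu_watts to the CPU:
    loss = input - output = cpu_watts * (1 - eff) / eff."""
    return cpu_watts * (1 - efficiency) / efficiency

for ghz, watts in [(4.0, 158.0), (4.3, 212.4), (4.5, 265.2), (4.7, 362.8)]:
    print(f"{ghz} GHz: {watts:.0f}W to the CPU -> ~{vrm_heat(watts):.0f}W of VRM heat")
# 4.3GHz -> ~32W and 4.7GHz -> ~54W, matching the figures above
```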



Intel Core i9-7900X and Core i7-7740X test: déjà-vus?
http://www.hardware.fr/articles/964-8/overclocking-pratique.html

Google Translate:

To perform this test, we use a (huge) Noctua D15 heatsink with two fans. We note the voltage (VID) and the power draw at the wall as well as at the ATX12V connector. The reported temperature comes from the processor's internal probes; we use the "Core Max" reading in HWiNFO64, which reports the maximum temperature across the cores. Let's see what that gives!
[Image: hardware.fr overclocking results table]

Google Translate:

We start with the Core i9-7900X, which doesn't do badly at all. If we set aside the fact that it doesn't really run at 4 GHz at its default voltage, it climbs in frequency fairly easily. Power consumption, on the other hand, explodes quite rapidly: at 4.6 GHz the draw measured at the ATX12V has almost doubled! In the end it's temperature that blocks us, since although stable at 4.7 GHz, in this configuration some cores begin to throttle, as reported by HWiNFO64.
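
The blow-up is expected: dynamic power scales roughly with frequency times voltage squared, and voltage has to climb with frequency. A toy calculation with guessed voltages (not hardware.fr's measured values; leakage pushes real numbers even higher):

```python
# Dynamic CPU power scales roughly with frequency * voltage^2; leakage adds
# more on top, which is why measured numbers climb even faster than this.
base_f, base_v, base_w = 4.0, 1.00, 158.0             # baseline; 1.00V is a guess
for f, v in [(4.3, 1.10), (4.5, 1.17), (4.7, 1.25)]:  # guessed OC voltages
    est = base_w * (f / base_f) * (v / base_v) ** 2
    print(f"{f} GHz @ {v}V -> ~{est:.0f}W estimated")  # vs. 212/265/363W measured
```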
 

elelunicy

Member
It's crazy how well those new chips overclock. I managed to get my 7900X to 4.6GHz on all cores with just 1.15v (with a temperature in the mid-70s under non-AVX full load, using a 280mm AIO). Meanwhile I could not get my old 4770k stable at 4.5GHz, even at 1.35v.

 

Renekton

Member
It's crazy how well those new chips overclock. I managed to get my 7900X to 4.6GHz on all cores with just 1.15v (with a temperature in the mid-70s under non-AVX full load, using a 280mm AIO). Meanwhile I could not get my old 4770k stable at 4.5GHz, even at 1.35v.
Wow, in that case it has good potential with de-lid.

Silicon Lottery did mention that 7900X has much better binning than 7800X and 7820X, in that 7900X's individual cores tend to produce less heat.
 
I think Skylake-X is dead on arrival. There's just no hype for it anymore. Way too much bad press. Hopefully this means they push forward Coffee Lake and we indeed see it in August/September like we're all hoping.
 

dr_rus

Member
I think Skylake-X is dead on arrival. There's just no hype for it anymore. Way too much bad press. Hopefully this means they push forward Coffee Lake and we indeed see it in August/September like we're all hoping.

It's a CPU which starts at $600* at retail; what do you expect? There was never any hype for such CPUs.

* - well, technically it starts at $400 but it makes little sense to buy a 6C SKX when it's lower clocked than an 8C and there's a S1151 6C incoming.
 

Durante

Member
Computerbase published their gaming tests.

[Image: ComputerBase gaming benchmark summary chart]


Apparently, just like AMD's "generalized" interconnect, the new Intel mesh fabric is not as well-suited to gaming (read: sucks for some games) as their old custom ring bus for each CPU.

If this was an AMD CPU I guess the general idea might be "no problem, the games will be patched", but honestly this just makes me even more happy with my decently overclocking 5820k.
 

Datschge

Member
Apparently, just like AMD's "generalized" interconnect, the new Intel mesh fabric is not as well-suited to gaming (read: sucks for some games) as their old custom ring bus for each CPU.

If this was an AMD CPU I guess the general idea might be "no problem, the games will be patched"
, but honestly this just makes me even more happy with my decently overclocking 5820k.
Unlike Zen, where latency is much lower within one CCX than outside it, Intel's mesh offers stable (high) latency, so there is nothing to optimize for in software aside from accounting for high latencies in general (or trying not to exceed the L2 cache). In the end the i9 is what it is: a server die put onto a HEDT platform, binned for overclocking.
 

Durante

Member
Unlike Zen, where latency is much lower within one CCX than outside it, Intel's mesh offers stable (high) latency
Do you have a source for that (actual full-matrix core-to-core latency benchmarks)? Generally, mesh topology core-to-core latencies are dependent on routing distance, which, unless controlled for in software, is even more variable than the two distance classes on consumer Zen. A friend did communication latency benchmarks on the Intel SCC (a spiritual predecessor of sorts, at least in terms of interconnect) and that was the result.
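
To illustrate what I mean by routing distance, a toy hop-count model (uniform per-hop cost, made-up layout, not measured data):

```python
from itertools import combinations

def mesh_hops(a, b, cols):
    """Manhattan distance between two stops on a grid-style mesh."""
    (ra, ca), (rb, cb) = divmod(a, cols), divmod(b, cols)
    return abs(ra - rb) + abs(ca - cb)

# 12 mesh stops laid out as a 3x4 grid (an invented floorplan)
hops = [mesh_hops(a, b, 4) for a, b in combinations(range(12), 2)]
print(min(hops), max(hops))   # 1 5 -> a 5x spread between closest and farthest pair
print(sum(hops) / len(hops))  # ~2.33 hops between an average pair
```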
 

elelunicy

Member
Computerbase published their gaming tests.

[Image: ComputerBase gaming benchmark summary chart]


Apparently, just like AMD's "generalized" interconnect, the new Intel mesh fabric is not as well-suited to gaming (read: sucks for some games) as their old custom ring bus for each CPU.

If this was an AMD CPU I guess the general idea might be "no problem, the games will be patched", but honestly this just makes me even more happy with my decently overclocking 5820k.
The mesh can be easily overclocked and thus close the gap. I've OC'ed mine from 2.4GHz to 3.2GHz and seen close to 10% improvement in some gaming benchmarks from just doing that.
 

Datschge

Member
Do you have a source for that (actual full matrix core-to-core latency benchmarks)?
https://www.pcper.com/reviews/Proce...X-Processor-Review/Thread-Thread-Latency-and-

The mesh can be easily overclocked and thus close the gap. I've OC'ed mine from 2.4GHz to 3.2GHz and seen close to 10% improvement in some gaming benchmarks from just doing that.
Same as with Ryzen, and a clear improvement over Intel's ring bus which was pretty memory speed agnostic.
 
I hope somebody tests Planet Coaster with the i9.
Planet Coaster is probably the most CPU-intensive game out there when you try to build a giant park with thousands of people, if you want to play at 60fps.

It's a game I don't mind playing at 20fps when things get big, though. It's pretty stable and you don't need fast reflexes for it, but it would be nice to watch a full park running at 60 without the Cheat Engine video technique.
 

Durante

Member
Well, that's not flat, but it's flatter than I would have expected. Still, the difference seems to be almost 10% between adjacent cores and the most distant cores.

Same as with Ryzen, and a clear improvement over Intel's ring bus which was pretty memory speed agnostic.
Is technology A, which you can overclock to be almost as good as technology B, really a clear improvement over B? ;)

The mesh advantage over the ring is in how easy it is to slap on a chip, and maybe in scalability, but certainly not in how well it works once it is designed for a specific chip (at least up to ~12 cores).
 

Datschge

Member
Is technology A, which you can overclock to be almost as good as technology B, really a clear improvement over B? ;)

The mesh advantage over the ring is in how easy it is to slap on a chip, and maybe in scalability, but certainly not in how well it works once it is designed for a specific chip (at least up to ~12 cores).
The problem with the ring bus is that every additional core increases the latency globally. For that reason Intel opted to go with two separate ring buses in XCC dies before, which resulted in a NUMA design with an additional penalty for communication between the buses. The mesh design is much more scalable with respect to core count, and it being more affected by memory speed is a side product of this.

I expect Intel to keep using the ring bus in consumer chips, which won't grow in cores as much. If low latency is preferred over core count, those will have to be chosen over server-derived HEDT chips. Waiting for Coffee Lake would be the option there, I guess.
 

elelunicy

Member
I hope somebody tests Planet Coaster with the i9.
Planet Coaster is probably the most CPU-intensive game out there when you try to build a giant park with thousands of people, if you want to play at 60fps.

It's a game I don't mind playing at 20fps when things get big, though. It's pretty stable and you don't need fast reflexes for it, but it would be nice to watch a full park running at 60 without the Cheat Engine video technique.
I would imagine it's ultimately bounded by single thread performance, just like most games in the simulation/strategy genre. The i9s most likely don't offer an improvement over, say, the 7700k.

It's actually one of the main reasons I went with Intel again over the Ryzens/Threadripper as I play too many simulation/strategy games where the single thread performance is too important. My currently most played game - The Sims 4 - is simultaneously a DX9 game where building houses with thousands of objects results in seriously low fps as DX9 only uses one thread to push draw calls, as well as a game where the main simulation runs on a single thread & has a ton of simulation lag issues.
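
The underlying arithmetic is just Amdahl's law. A quick sketch (the 60% serial share is a made-up illustration, not a measured figure for any of these games):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

# If 60% of frame time lives on one thread, extra cores barely help:
print(amdahl_speedup(0.6, 4))    # ~1.43x over a single core
print(amdahl_speedup(0.6, 10))   # ~1.56x: ten cores buy almost nothing extra
```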
 
I would imagine it's ultimately bounded by single thread performance, just like most games in the simulation/strategy genre. The i9s most likely don't offer an improvement over, say, the 7700k.

It's actually one of the main reasons I went with Intel again over the Ryzens/Threadripper as I play too many simulation/strategy games where the single thread performance is too important. My currently most played game - The Sims 4 - is simultaneously a DX9 game where building houses with thousands of objects results in seriously low fps as DX9 only uses one thread to push draw calls, as well as a game where the main simulation runs on a single thread & has a ton of simulation lag issues.

If I heard correctly from Frontier, it actually tries to use all the power of the CPU, even multithreaded. I may be wrong though, but as I said, I remember some topic about the Ryzens where the programmer, Andy C, said it would work better.
Also, from all the tests we have done, the most CPU-expensive stuff in Planet Coaster is the guests (I have a park where I used 300,000 objects, and without guests it works pretty fine), so it already seems it works differently than The Sims 4 (wasn't that game programmed pretty crappily? DX9 still?). What Frontier has said multiple times is that Planet Coaster would be a game with no sequels soon and that it would evolve into a greater game with better PCs in the future.
 

elelunicy

Member
If I heard correctly from Frontier, it actually tries to use all the power of the CPU, even multithreaded. I may be wrong though, but as I said, I remember some topic about the Ryzens where the programmer, Andy C, said it would work better.
Also, from all the tests we have done, the most CPU-expensive stuff in Planet Coaster is the guests (I have a park where I used 300,000 objects, and without guests it works pretty fine), so it already seems it works differently than The Sims 4 (wasn't that game programmed pretty crappily? DX9 still?). What Frontier has said multiple times is that Planet Coaster would be a game with no sequels soon and that it would evolve into a greater game with better PCs in the future.

Use all threads != use all threads equally. A game can use all of your cores but still be bounded by single thread performance. For example, the Cities: Skylines dev openly admitted on Reddit that the game can use all of your cores for things like rendering and sound, but the main simulation runs on one CPU thread.

For Planet Coaster I just downloaded a 2000-guest park from the workshop and it ran at about 20fps for me (no GPU bottleneck here, as I ran it in a small 1080p window; the CPU is clocked at 4.6GHz and paired with 3600MHz RAM). Process Explorer shows that the game uses many threads, but one thread clearly uses more CPU than the rest (the 7900X has 20 threads, so just 5% CPU would max out what a single thread can do).
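
If anyone wants to reproduce that kind of check without Process Explorer, here's a rough sketch with psutil (the PID is a placeholder; thread times are cumulative, so sample twice and diff):

```python
import time
import psutil  # pip install psutil

def thread_cpu(pid, interval=5.0):
    """Per-thread CPU usage (% of one logical core) over an interval.
    Thread times are cumulative, so sample twice and take the delta."""
    proc = psutil.Process(pid)
    before = {t.id: t.user_time + t.system_time for t in proc.threads()}
    time.sleep(interval)
    after = {t.id: t.user_time + t.system_time for t in proc.threads()}
    return {tid: 100 * (total - before.get(tid, 0.0)) / interval
            for tid, total in after.items()}

# 12345 is a hypothetical game PID. A thread near 100 here saturates one
# logical core, i.e. the "5% of total CPU" ceiling on a 20-thread 7900X.
print(max(thread_cpu(12345).values()))
```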

 
Reviewers continue to focus largely on latency, while ignoring the importance of software optimisations and platform improvements. That there are latency penalties is not in question, but it's fairly obvious in quite a number of cases where updates have been implemented, or where they may be needed (not suggesting this applies to every instance of a performance discrepancy).


Hardware.fr clock-for-clock and HT/SMT enabled -vs- disabled:

http://www.neogaf.com/forum/showpost.php?p=242210508&postcount=3357




Core X: Problems with memory and performance are reminiscent of Ryzen [Google Translated]
https://www.computerbase.de/2017-07/core-x-probleme-speicher-leistung/


  • Problems with memory at high clocks and tight timings
  • Performance in games varies with the motherboard

Google Translate:

Having had little hardware available at the launch of Core X on June 19, ComputerBase has now carried out more detailed tests on Intel's new platform. The problems observed in the Core i9-7900X review have been confirmed: the X299 platform is currently struggling with memory and performance.



Computerbase published their gaming tests.

[Image: ComputerBase gaming benchmark summary chart]


Apparently, just like AMD's "generalized" interconnect, the new Intel mesh fabric is not as well-suited to gaming (read: sucks for some games) as their old custom ring bus for each CPU.

If this was an AMD CPU I guess the general idea might be "no problem, the games will be patched", but honestly this just makes me even more happy with my decently overclocking 5820k.

[Images: ComputerBase per-game benchmark charts (Battlefield 1, Dawn of War III, F1 2016, Ghost Recon Wildlands, Prey, Rise of the Tomb Raider, Total War: Warhammer)]
 

Lonely1

Unconfirmed Member
Looks like it. It's kinda amazing that Crysis 3 is one of the best-scaling multicore games out there after so many years. Still, no reason to upgrade from Haswell-E.
 

Erebus

Member
Unless I'm missing something looking at all these reviews/benchmarks (not X299 platform exclusively), I don't see how upgrading my 3570k will benefit me in any way when I'm only gaming at 1080p60.
 

rtcn63

Member
Unless I'm missing something looking at all these reviews/benchmarks (not X299 platform exclusively), I don't see how upgrading my 3570k will benefit me in any way when I'm only gaming at 1080p60.

It doesn't really, at least not to the point where you *need* a new build. Only in the more CPU-demanding games (e.g. Watch Dogs 2) will you start struggling to hit a 60fps average, and of course, crazy shit like Ashes of the Singularity.
 

dr_rus

Member
New Intel CPU Cache Architecture Boosts Protection Against Side-Channel Attacks
https://www.bleepingcomputer.com/ne...osts-protection-against-side-channel-attacks/

According to G Data Principal Malware Analyst Anders Fogh, this small change in cache architecture has improved the CPU's security, as it thwarted some types of side-channel attacks.

The term side-channel attack describes a type of attack used for leaking data from a computer's memory or CPU, usually focused on data from encrypted operations.

The researcher believes this new cache architecture will block Flush+Flush attacks and some of the Rowhammer attacks, but not CLFlush and Flush+Reload.

"Having a non-inclusive L3 cache is significantly more secure from a side channel perspective than an inclusive in cross core scenarios," Fogh explains.
Despite the improvements, Fogh doesn't see this new cache architecture trickling down to mid and low-end notebooks and laptops anytime soon.

Nonetheless, most side-channel attacks are developed and aimed at high-end server products, usually deployed in cloud and hosting environments, where a side-channel attack could allow an attacker access to troves of enterprise data stored in the cloud.
 

Renekton

Member
Unless I'm missing something looking at all these reviews/benchmarks (not X299 platform exclusively), I don't see how upgrading my 3570k will benefit me in any way when I'm only gaming at 1080p60.
Some games still spike my i5 to below 60fps or weird frametimes.
 

Kayant

Member
Problem Number 2: Power
Skylake and Kaby Lake are different x86 microarchitectures – the KBL core design was meant to be an ‘optimization’ implementation of Skylake, hitting a few pieces of low-hanging fruit and using an updated 14nm process to give better power consumption and better voltage/frequency response from the silicon. There isn’t so much drastic change in the cores, but there is in how the power is delivered.

Skylake-X uses an integrated voltage regulator, or IVR. If you recognize the term, that is because Intel launched its Broadwell based CPUs with a FIVR, or fully-integrated voltage regulator. Skylake-X does not go all-in like Broadwell did, but for some of the voltage inputs to the CPU, the processor takes in a single voltage and splits it internally, rather than relying on the external circuitry of the motherboard to do so. This affords some benefits, such as consistency in voltage delivery and, to a certain extent, some power-efficiency gains, and it should simplify the motherboard design - unless you also have to design for non-IVR CPUs, like Kaby Lake-X.

Kaby Lake-X is more of a typical power delivery design, with all the required voltages being supplied by the motherboard. That means that the motherboard has to support both types of voltage delivery, and also adjust itself at POST if a different CPU has been placed in. This obviously adds to the boot time to check if it is the same, but it does require some voltages to be moved around, as too high a voltage can kill a processor. We’ve already killed one.

Specifically, the VRIN voltage on Skylake-X needs to be 1.8V input into the processor for the IVR to work. The same setting on Kaby Lake-X needs to be 1.1 volts for VCCIO. If the motherboard originally had a SKL-X processor in it and does not detect when a KBL-X processor is in, then the motherboard will supply 1.8 volts into the KBL-X rail and the chip will soon die.

When we received samples for SKL-X and KBL-X, we were told by our motherboard partners that if we were switching between the two CPUs, we would have to flush the BIOS. This involves removing AC power when switched off, and holding the Clear CMOS button for 30-60 seconds to drain the capacitors and essentially reset the BIOS to default, so it could then detect which CPU was in play before applying full voltages.

We did this, and still ended up with a dead Kaby Lake i7-7740X. There is now a lump of sand in my CPU box. The interesting thing is that this CPU did not die instantly: we started SYSMark, which involves several reboots during the initial preparation phase. On about the 4th reboot, the system stuck with the BIOS code 0d. Nothing I did was able to go beyond this code, and I put in our Kaby i5 and that ran fine. I put in SKL-X and that ran fine. I put the Kaby i5 in and that ran benchmarks fine. It would appear that our initial Kaby i7 did not have much headroom, and we had to get a replacement for some of the benchmarks.


Incidentally, we also had an i9-7900X die on us. That seems to be unrelated to this issue.

So The Solution?
Motherboard manufacturers have told us that there may be chip-specific motherboards out there in the future. But as it stands, users looking at KBL-X would save a lot of money (and headache) staying with Z270, as the motherboards are cheaper and more streamlined for a Kaby Lake design.

This excerpt from the AnandTech review encapsulates how much of a shitshow the addition of KBL-X to X299 is.
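
To make the failure mode concrete, here's a toy sketch of the kind of detection logic the boards need (purely illustrative, not actual firmware):

```python
# Why the BIOS has to re-detect the CPU before applying voltage: the same
# rail wants 1.8V on Skylake-X (IVR input) but only 1.1V on Kaby Lake-X,
# and guessing high kills a KBL-X chip. The safe default is the lower voltage.
SAFE_RAIL_VOLTAGE = {"skylake-x": 1.8, "kaby-lake-x": 1.1}

def rail_voltage(detected_family):
    # Undervolting an unknown chip just fails to POST; overvolting is fatal.
    return SAFE_RAIL_VOLTAGE.get(detected_family, 1.1)

print(rail_voltage("kaby-lake-x"))  # 1.1
print(rail_voltage("unknown"))      # 1.1 (fail safe)
```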
 

Kayant

Member
How often does a regular user swap a CPU on his motherboard?

Intel's own explanation for KBL-X is that they are entry-level CPUs for the X299 platform and that buyers would later upgrade to bigger HEDT chips.

Especially given that flushing the BIOS is something a typical user isn't likely to remember to do down the line. And as in AnandTech's case, even that process can fail.
 

dr_rus

Member
Intel's own explanation for KBL-X is that they are entry-level CPUs for the X299 platform and that buyers would later upgrade to bigger HEDT chips.

Intel's own explanations can be whatever; I personally don't see why anyone would even buy KBL-X unless they want to do some extreme overclocking or have some task which requires the fastest x86 sequential performance. In both cases, switching to a SKX later would be a downgrade.

That being said, do you expect these BIOS issues (which is what they are) to remain in place several months from now when someone may actually make such switch? I mean, yeah, it's a bad glitch which shouldn't have ever happened but this is a new platform and it was always like this with new platforms. This stuff will be fixed down the road.
 