
[Rumour] Intel to stop sharing detailed turbo clocks per core starting with Coffee Lake

btrboyev

Member
Intel's been screwing the pooch for so long now. Especially in the mobile space, where they let ARM chips run all over them.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Kinda sounds like they're borrowing AMD's Athlon marketing strategy (remember "3200+" chips that don't actually run at 3.2GHz?)

Not a great look.
Entirely different scenario - the AMD 'performance rating' campaign did not avoid establishing an actual clock for their parts; the fact that the clock was lower than the 'performance rating' number does not void that. Here the rumour is that Intel refuses to disclose a low-end clock for their SMP configs at full load -- that gives them entirely different leverage.
 

SRG01

Member
That just makes me suspicious that their manufacturing samples are underperforming. Hiding specs isn't a good way to make your product attractive, Intel.

It's been known for years that they've been having trouble with their process nodes, hence the perpetual tick-tocktocktocktock of late.

However, they're still delivering massive profit and value to their shareholders, so nothing will change until they report downward guidance or a huge miss.

Intel's been screwing the pooch for so long now. Especially in the mobile space, where they let ARM chips run all over them.

x86 was never going to be a huge hit in the mobile space, as the architecture isn't well optimized for it. Whoever was pushing that had their head in the clouds.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
x86 was never going to be a huge hit in the mobile space, as the architecture isn't well optimized for it. Whoever was pushing that had their head in the clouds.
Oh, Intel managed to convince entire generations of CS majors that architecture does not matter... It turned out in the end it did.
 

Quasar

Member
The point is your production runs don't have to adhere to specs - ultimately you don't need to bin, as effectively everything is "up to spec".

That's going to be great for buyers. Can't even count on reviews as your particular chip might run crappier or better.
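Here's a toy model of that worry (every number in it is hypothetical): only the clocks Intel actually publishes can be held against a chip, so a reviewer's sample and your retail chip can behave differently at the undisclosed 4-core step while both count as "up to spec".

# Toy model of the binning worry -- all numbers here are made up.
# Only the disclosed clocks can be enforced as "spec".
DISCLOSED_MHZ = {"base": 3700, "turbo_max_1c": 4700, "all_core_6c": 4300}

def up_to_spec(chip: dict) -> bool:
    """A chip counts as in-spec if it meets every published clock."""
    return all(chip[key] >= mhz for key, mhz in DISCLOSED_MHZ.items())

# The reviewer's sample and your retail chip differ at the undisclosed
# 4-core step, yet both pass.
review_sample = {"base": 3700, "turbo_max_1c": 4700,
                 "all_core_6c": 4300, "turbo_4c": 4600}
your_chip = {"base": 3700, "turbo_max_1c": 4700,
             "all_core_6c": 4300, "turbo_4c": 4400}
print(up_to_spec(review_sample), up_to_spec(your_chip))  # True True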
 
This is Intel's third go-around re-releasing the Skylake architecture. They must be frustrated at this point with how many delays Cannonlake has had. I mean, Skylake, Kaby Lake, and Coffee Lake are literally the same thing three times in a row. Kaby Lake had very small clock speed bumps, but it's obvious that they are at the architecture's limit, which is why they are hiding the turbo clocks on Coffee Lake.

Ryzen could not have arrived at a worse time for Intel. The serendipity of releasing Ryzen exactly when Intel was stalled on their next process improvement is astounding for AMD.
 

Neith

Banned
I'm gonna keep my new 2600K lol. It's running absolutely great. No new motherboard, same 2133 ram, and a decent boost in games. Sandy Bridge fo life.
 
x86 was never going to be a huge hit in the mobile space, as the architecture isn't well optimized for it. Whoever was pushing that had their head in the clouds.

Whoever was pushing that will one day be remembered for being responsible for Intel's ultimate downfall. Intel was once a leading producer of ARM chips (remember the StrongARM?) and they sold that division off to focus on x86. Had they stayed in ARM, who knows how different the world might have been.

This info posted is confidential though :) or at least videocardz said so

Videocardz vets their info a lot better than most places. Then again, "most places" includes the likes of WCCFTech and Fudzilla and Tweaktown so it's not that high a bar to clear.
 

diablogod

Member
I'm gonna keep my new 2600K lol. It's running absolutely great. No new motherboard, same 2133 ram, and a decent boost in games. Sandy Bridge fo life.

I'm with ya lol, maybe when DDR5 is the standard and this competition between AMD and Intel really heats up will I upgrade. For now, 1080p on my 2600K / GTX 1060 seems more than fine for the foreseeable future.
 

liezryou

Member
More anti-consumer bullshit. This is exactly why we need competition.

I'm gonna keep my new 2600K lol. It's running absolutely great. No new motherboard, same 2133 ram, and a decent boost in games. Sandy Bridge fo life.

Intel finally decided to do more than a 5% performance improvement generation-to-generation after 5 YEARS. Thanks, AMD. While your 2600K is running great right now, I wouldn't expect the same over the next 5 years if AMD manages not to screw up what they have here.

This is why [Intel/Nvidia] fanboyism and mindshare need to stop. You are literally rooting for a potential monopoly that would be completely anti-consumer. Imagine what our mainstream processors would be like if AMD had actually been able to compete the past 5 years.
 

Mr_Moogle

Member
How can a company that spends so much on R&D find itself at a disadvantage to AMD? What the hell has Intel been spending the money on? Does it all get funneled to mobile?
 

pa22word

Member
Is Coffee Lake supposed to be a stop-gap CPU before Ice Lake?

Who knows at this point. It feels like the last few -lake series have all been relegated to stopgaps. People were saying to skip kaby earlier this year because Cannon was going to be the next big thing, and here we are. Someone else posted about the tick-tocktocktocktock nature of intel's seemingly broken process advancement over the past few years, so at this point I'll believe in 7/10nm when I see it, honestly.
 
Well, to use an example: the price-to-performance ratio of the Ryzen 5 1600 makes all of Intel's current i5s look like a waste of money.

I don't see how that means Intel has bad R&D? If anything I think it just means Intel has been able to sit on fat profit margins for a long time.
 

Timeless

Member
How can a company that spends so much on R&D find itself at a disadvantage to AMD? What the hell has Intel been spending the money on? Does it all get funneled to mobile?

Well, to use an example: the price-to-performance ratio of the Ryzen 5 1600 makes all of Intel's current i5s look like a waste of money.
Spitballing here, but maybe it's that no credible competition let them charge higher prices. Their margins might have been very good. AMD could be forcing their hand toward cheaper/better parts.
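As a rough sanity check on the quoted claim, some back-of-the-envelope price-per-thread math (launch MSRPs quoted from memory, so treat them as approximate):

# Price-per-thread behind the "i5s look like a waste of money" claim.
# Launch MSRPs from memory -- approximate.
parts = {
    "Ryzen 5 1600 (6C/12T)": (219, 12),
    "Core i5-7600K (4C/4T)": (242, 4),
}
for name, (price_usd, threads) in parts.items():
    print(f"{name}: ${price_usd / threads:.0f} per thread")
# -> Ryzen 5 1600 (6C/12T): $18 per thread
# -> Core i5-7600K (4C/4T): $60 per thread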
 

liezryou

Member
How can a company that spends so much on R&D find itself at a disadvantage to AMD? What the hell has Intel been spending the money on? Does it all get funneled to mobile?

Well, to be fair, you have to understand Intel has its hands in A LOT of cookie jars. I would wager the majority of its R&D budget does not go towards consumer PC CPUs.
 

Aroll

Member
Sounds like they are more worried about AMD than they are admitting.

- Won't even say how big the die is
- Won't give straight numbers on clock speeds, which some general consumers actually use in purchasing decisions

Why? Because, of course, releasing any of that info before the chips are out in the wild with reviewers running tests gives AMD a heads up, and they could respond in kind.

I wonder if they got a bunch of those Threadrippers in house and realized something - like holy crap, AMD has their stuff together.

It's a very interesting arms race we're in right now. nVidia has confirmed no new GPU tiers this year (which kinda keeps Vega competitive long enough for AMD to have something new out when nVidia does), AMD's Ryzen stuff has taken a big firecracker to the entire marketplace, Threadripper included, and Intel rushed out a response that underwhelmed and that few people really think needs to exist... (not talking about the 7700 or whatever, you know which CPUs I'm talking about)

So yeah, it's interesting. It's been a LONG TIME since Intel and nVidia had any TRUE competition across the whole spectrum of computer building, from low end to enthusiast to even server side. And AMD really, really brought it this year, with well-thought-out products that improved so fast after launch that right now they are all stable with basically every application.

Of course, there is a huge difference in HBM2 versus what nVidia has done so far, but eh, you know, different strokes.


I love that this is the world we live in right now. It's like the days when I was just a little kid, when AMD, Intel, nVidia... everything was super competitive, you couldn't really go wrong, and more importantly, you had options.

As a YouTube content creator, I wish I'd built my editing rig a year later. I just didn't trust AMD to deliver as many cores as they did, so I went with a cheapish 6700K because I couldn't afford the Intel CPUs that had all the cores I wanted for rendering.

But if I knew Ryzen and Threadripper would be this good, I would have waited. No regrets on my GTX 1070 given how much GPU prices have blown up thanks to cryptomining. Was lucky to get it at the price I did.

This is a great time, man. Can't wait to build a new machine.
 

SRG01

Member
Well, to use an example: the price-to-performance ratio of the Ryzen 5 1600 makes all of Intel's current i5s look like a waste of money.

There are three reasons:

- Intel is used to playing in a monopoly market
- Intel is using a huge monolithic chip, leading to both performance and yield issues
- Intel is having trouble with their current fabrication node

Whoever was pushing that will one day be remembered for being responsible for Intel's ultimate downfall. Intel was once a leading producer of ARM chips (remember the StrongARM?) and they sold that division off to focus on x86. Had they stayed in ARM, who knows how different the world might have been.

I think Intel still has an ARM license for their security co-processor? Or am I confusing that with AMD?

Who knows at this point. It feels like the last few -lake series have all been relegated to stopgaps. People were saying to skip kaby earlier this year because Cannon was going to be the next big thing, and here we are. Someone else posted about the tick-tocktocktocktock nature of intel's seemingly broken process advancement over the past few years, so at this point I'll believe in 7/10nm when I see it, honestly.

To be fair, critical feature sizes (aka gate widths) are unreliable indicators of a process as each manufacturer uses different metrics. IIRC, Intel's 10nm is actually the smallest on the market right now as it's a real measure of gate width whereas others typically use line widths instead.

I can't find the feature size diagram that has been floating on the net for the past while... my Google-fu is failing me.
 

Renekton

Member
Some images for you, including a die shot. For the first time since Sandy Bridge, the CPU is larger than the GPU? :)

[image: Coffee Lake die shot]

 
So on top of needing a new motherboard, Intel's increased prices across the board:

[image: Coffee Lake price list]

Eh, I do think there is some leeway considering they've increased the core counts. Getting a 6C/12T i7 for under $400 that hits 4.7GHz on one core or 4.3GHz on all cores is, I think, a pretty good deal.
 
Eh, I do think there is some leeway considering they've increased the core counts. Getting a 6C/12T i7 for under $400 that hits 4.7GHz on one core or 4.3GHz on all cores is, I think, a pretty good deal.

They've increased core counts on their mobile 8th-gen line and prices stayed the same.
 
They've increased core counts on their mobile 8th-gen line and prices stayed the same.

Maybe it was cheaper in R&D to increase them on mobile than on the desktop parts? I dunno. Just my opinion that the prices don't seem unreasonable (to me), but I can see how someone might feel otherwise.
 
Really not much of a reason not to go Ryzen on new builds unless you are purely aiming for the very top-end of gaming performance, and even then you can probably just go 7700k and save yourself some cash.

After X299 this looks like another stinker.
 

Goo

Member
The 8400 is priced to move. If the turbo works well and doesn't cause stutters in games, that might be the best bang for the buck in this series of CPUs. It will be interesting to see how it compares to the Ryzen 1600; Ryzen's SMT throughput is pretty good, and it might be good enough to edge out a win in multithreaded tasks.
 

synce

Member
I've always gone Intel just because they're simply more powerful than AMD clock for clock, but if this is how they're going to do business, I'd rather have a weaker CPU than support Intel. For years now there's been a pattern of screwing over their customers just because they think they can.
 

Kayant

Member
I have to say I'm loving the marketing and how they are doing everything not to have a single inferior point versus the competition 😂.

Also, is per-core overclocking really new? Unless I'm missing something, my i5 4690K is able to do that already?
 
I've always gone Intel just because they're simply more powerful than AMD clock for clock, but if this is how they're going to do business, I'd rather have a weaker CPU than support Intel. For years now there's been a pattern of screwing over their customers just because they think they can.

Well, I think Intel is not in a good position to keep fattening margins right now. Charging twice as much for 5-10% more performance is really not going to make customers happy. Assuming GlobalFoundries can actually keep up with production, AMD has a chance to claw away a lot of market share in the next year or so while Intel tries to figure out when they'll be able to launch Cannonlake.

And Cannonlake was the "tick" in the old "tick-tock", meaning it's not even a new architecture, so if Intel isn't able to make it really any faster than Skylake/Kaby Lake/Coffee Lake, then the stagnation of Intel performance is probably going to run all the way through 2019, or whenever Ice Lake is supposed to debut.
 

dr_rus

Member
Well, the turbo clock is revealed.

This info posted is confidential though :) or at least videocardz said so

Turbo clocks won't go anywhere, and neither will Turbo Max clocks. What they decided to hide is this:

[image: table of per-core turbo clock steps]


What they give for CFL is base, all-core turbo, and Turbo Max, but not the intermediate steps for the 2C and 4C clocks. Granted, as I've said, this doesn't make much difference for a 6C CPU, but not having this info for HEDT 10C+ CPUs would be sad.
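To make the base / all-core / intermediate distinction concrete, here's a sketch of what such a table encodes for a 6C part. The 1C and 6C figures are the published ones discussed in this thread; the 2C-5C steps are hypothetical stand-ins for exactly the values Intel would no longer disclose.

# Per-core turbo in a nutshell: the max clock depends on how many cores
# are active. 1C and 6C values are the published ones; 2C-5C are MADE UP.
TURBO_TABLE_MHZ = {
    1: 4700,  # Turbo Max -- still published
    2: 4600,  # hidden intermediate step (hypothetical)
    3: 4500,  # hidden (hypothetical)
    4: 4400,  # hidden (hypothetical)
    5: 4400,  # hidden (hypothetical)
    6: 4300,  # all-core turbo -- still published
}

def max_turbo_mhz(active_cores: int) -> int:
    """Highest clock the chip may run with this many cores loaded."""
    return TURBO_TABLE_MHZ[active_cores]

print(max_turbo_mhz(4))  # the kind of step Intel stopped disclosing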
 

LordOfChaos

Member
They're increasingly obfuscating core details from consumers; it's really annoying.

Core M becoming Core i (to be fair, becoming it again, but the old split was still better), so you have to go hunt for the rest of the model number, which a PC maker may not list outright; Celeron covering both Braswell (puke) and Broadwell (good), etc.


Kinda sounds like they're borrowing AMD's Athlon marketing strategy (remember "3200+" chips that don't actually run at 3.2GHz?)

Not a great look.

AMD did that to dispel the megahertz myth back then; the rating was the performance equivalent of a Pentium clocked at that speed, so the Athlon XP 3200+, for example, actually ran at 2.2GHz (the scheme only broke down at the end of that cycle). They also never hid the actual clock speed details.

Oh, Intel managed to convince entire generations of CS majors that architecture does not matter... It turned out in the end it did.

That post-Itanium, x86-or-bust PTSD even pushed them to (try to) make a GPU based around x86 cores (Larrabee, lol).
 

LordOfChaos

Member
Some images for you, including a die shot. For the first time since Sandy Bridge, the CPU is larger than the GPU? :)

I don't think that's been unusual the last few gens for a quad-core part with a lower-end GPU. It was mostly their dual-core ULVs that dedicated a lot more die area to the GPU, since the core area was halved (plus less cache) and more GPU units were sometimes added so it could run at lower power.
 

ethomaz

Banned
That's kind of the point. This information was always going to be found. The key is that Intel isn't making any public guarantees about it.
Turbo is configurable, no?

I mean, there is an option in the BIOS to set the multiplier for turbo mode... at least on my old CPUs I can set it however I wish.
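For what it's worth, the chip's own turbo ratio table is also readable in software regardless of what the spec sheets say. A minimal sketch, assuming a Linux box with the msr kernel module loaded (modprobe msr) and root access; on Intel client parts, byte N of MSR_TURBO_RATIO_LIMIT holds the max turbo ratio with N+1 cores active:

# Read MSR_TURBO_RATIO_LIMIT (0x1AD) and print the per-core turbo steps.
import struct

MSR_TURBO_RATIO_LIMIT = 0x1AD
BUS_CLOCK_MHZ = 100  # BCLK on modern Intel client parts

with open("/dev/cpu/0/msr", "rb") as f:
    f.seek(MSR_TURBO_RATIO_LIMIT)
    (value,) = struct.unpack("<Q", f.read(8))

for active_cores in range(1, 7):  # e.g. a 6-core part
    ratio = (value >> (8 * (active_cores - 1))) & 0xFF
    print(f"{active_cores} active core(s): up to {ratio * BUS_CLOCK_MHZ} MHz")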

Turbo clocks won't go anywhere, and neither will Turbo Max clocks. What they decided to hide is this:

[image: table of per-core turbo clock steps]


What they give for CFL is base, all-core turbo, and Turbo Max, but not the intermediate steps for the 2C and 4C clocks. Granted, as I've said, this doesn't make much difference for a 6C CPU, but not having this info for HEDT 10C+ CPUs would be sad.
So what they are hiding is how it scales with the number of cores in use? The max turbo clock with only one core in use you still get as usual?
 