60 Comments
CajunArson - Friday, December 28, 2018 - link
When they finally make a 9900KLF version they can launch it at 3 AM.
Exodite - Friday, December 28, 2018 - link
At least we can be certain that it would rock (you)!
Reflex - Friday, December 28, 2018 - link
Excellent reference. Blows me away how many people don't remember The KLF despite how dominant they were.
bananaforscale - Friday, December 28, 2018 - link
All aboard the hype train to Trancentral!
bug77 - Saturday, December 29, 2018 - link
3am GMT or Mumu land time?
@Reflex Not _that_ dominant, they only had one album that I know of. Still, that one album is better than the entire works of Britney Spears and Taylor Swift. Together.
leexgx - Sunday, December 30, 2018 - link
Please stop it with your stupid videos in between your own articles; it's causing my Bluetooth headset to stop audio coming from my other phone.
DanNeely - Monday, December 31, 2018 - link
The cancer is from the site's corporate overlord. Ryan, Ian, etc. have no control over it. The only defence we have is to use adblock and explicitly block the videos.
HardwareDufus - Wednesday, January 2, 2019 - link
I shot milk out of my nose. Thank you for the moment of Levity.... 3 AM Eternal.
mooninite - Friday, December 28, 2018 - link
An 8 core Intel CPU for only $374? This would have been impossible if AMD hadn't released Ryzen.
Cellar Door - Saturday, December 29, 2018 - link
Still overpriced.
Opencg - Saturday, December 29, 2018 - link
Not for gamers. If it's anything like the 9700k it will offer equal or better performance than the 9900k at a dramatically reduced price. This is the first time there has actually been a value buy at the very top end of gaming performance. It's funny you whine about it being overpriced. Shows how much you know.
imaheadcase - Saturday, December 29, 2018 - link
Actually the price won't be much less, maybe $25-50 off sure, but the real kicker is the core speed should be a lot higher because there's no GPU onboard to add excess heat and power draw.
They should have done this at the start; most people would agree that it's kind of pointless to put a GPU on a higher end CPU when the majority are sold with a GPU on lower-end CPUs anyway.
MrSpadge - Saturday, December 29, 2018 - link
Nope: If you don't use the iGPU it doesn't use any power.
Opencg - Sunday, December 30, 2018 - link
You say "actually" like you understood what I wrote. Please reread dumbass.imaheadcase - Sunday, December 30, 2018 - link
Ok, same answer.
Opencg - Monday, December 31, 2018 - link
Ok, well somehow you read it twice and missed the fact that I was comparing the 9700k to the 9900k, which is why I clearly typed 9700k and 9900k. The difference is not $25-50. Congrats, man.
piiman - Sunday, December 30, 2018 - link
Why do you think they are value priced? They haven't even been launched and there is no pricing info. Even so I wouldn't expect them to be too much less; I mean, how much does the GPU part really cost Intel? They may be $25.00 less if you're lucky, IMO, but I of course could be wrong :)
imaheadcase - Sunday, December 30, 2018 - link
You're just saying what I said; I said $25-50. :P
Opencg - Monday, December 31, 2018 - link
They are, because the 9700k is value priced; the F variant would have to cost way more for it to not be value priced. Clearly you guys just can't read and comprehend though. Did I type an F? No, your brain synthed it from nowhere. 000
bug77 - Saturday, December 29, 2018 - link
There were HT parts at that price. Depending on what you did, they could perform just like a true 8 core or much worse.
Either way, core is the new MHz these days...
HStewart - Saturday, December 29, 2018 - link
I was curious why some models have no hyperthreading - 8 cores but without HT.
Just maybe people are finding out you really don't need that many cores and that it's better to have faster cores instead.
CPU manufacturers have gotten sloppy - instead of improving the architecture, they just add more cores. I'm excited about Sunny Cove, because it means the architecture is independent of the node process.
The same goes for memory: more memory nowadays means developers (which I am one) can be sloppy in their programming - instead of counting clock cycles and optimizing access techniques, they can load data structures into memory and just depend on having more of it.
bug77 - Monday, December 31, 2018 - link
In the old days, there was just one dimension to CPUs: the speed of a single core. Back then, it was possible to order CPUs from slowest to fastest.
Now the problem has more dimensions: core count and even boost clocks (and how long they can be sustained). Today it's no longer possible to define a single metric by which one CPU is faster than another. It all depends on your typical workload.
And yes, as you suspected, many workloads don't actually need that many physical cores.
imaheadcase - Saturday, December 29, 2018 - link
Right /rolls eyes.
FullmetalTitan - Friday, December 28, 2018 - link
One might be forgiven for thinking Intel would have launched this lineup without iGPU from the start, as that would potentially mean a smaller die, and therefore better yields/more die per wafer. But this is not exactly Intel's best few years of thinking ahead or adapting, so it makes sense they would completely miss the low hanging fruit solution to some of their supply issues, a solution AMD has been using already with high end parts that are EXPECTED to be paired with a discrete graphics device.
FullmetalTitan - Friday, December 28, 2018 - link
It is also possible, of course, that these are simply salvaged die from the original design where defects were restricted to the iGPU portion of silicon, and therefore just have the block disabled. That consideration aside, my point stands that Intel missed an option to ease their supply woes months earlier.
DanNeely - Friday, December 28, 2018 - link
These are almost certainly die salvages, not a new design. It'll be interesting to see what availability looks like; depending on how low it is, they might not have done it sooner because it took until now to have a worthwhile number of dead-GPU chips to create the SKUs. Unlike the smaller mobile chips, the big desktop ones don't have a lot of GPU area on the die, so CPUs with good compute cores, cache, etc., but a busted GPU are probably relatively rare.
shompa - Saturday, December 29, 2018 - link
Die salvages have been i5/i3 for 4 years. That's why these "salvages" are just marginally cheaper than the full version. Intel did not throw these CPUs away before, just branded them differently in the fake segmentation that Intel does.
Intel's supply problem is not "unprecedented demand" like the narrative says. The real story is that Intel has 2-3 14nm fabs that are at 100% capacity. When Intel starts adding more cores/die area, they actually manufacture fewer complete chips to sell. Intel thought 10nm would be online, so there is simply no fab space now that Intel has added more cores to "compete" with AMD. The funny thing with "shortages" is also that Intel can charge more for their CPUs, so why should they care about producing more?
If we instead used ARM we would be complaining about 8 core CPUs that cost 25-50 dollars. But somehow people are brainwashed that x86 prices are "normal" and that x86 is the "fastest", something it never has been. It's not even real 64bit, something RISC was in 1990. Heck, you can't even run 32bit code on A-series SoCs today since there is no 32bit. Good luck trying to do this on x86 (you can't remove the 32bit registers since the 64bit ones are extensions). Fake 64bit is why x86 is the ONLY CPU that did not show a performance increase going from 32 to 64bit - instead a 3% decrease. Compile the same app in 32bit and 64bit and run it on an A7 and you see the usual 30-40% speed increase going to 64bit. We REALLY need to move away from x86...
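As an editorial aside on the "more cores per die means fewer chips" point above, here is a rough gross-dies-per-wafer estimate using the standard approximation; the die areas are assumed ballpark figures for 14nm quad- versus octa-core client dies, not official numbers.

```c
#include <math.h>
#include <stdio.h>

/* Rough gross-dies-per-wafer estimate (standard approximation, ignores yield):
 * dies ~= pi*(d/2)^2/A - pi*d/sqrt(2*A), with wafer diameter d and die area A. */
static double gross_dies(double wafer_mm, double die_mm2)
{
    const double pi = 3.14159265358979;
    double r = wafer_mm / 2.0;
    return pi * r * r / die_mm2 - pi * wafer_mm / sqrt(2.0 * die_mm2);
}

int main(void)
{
    /* ~126 mm^2 and ~177 mm^2 are assumed ballpark areas for 14nm
     * quad-core and octa-core client dies, respectively. */
    printf("quad-core dies/wafer: ~%.0f\n", gross_dies(300.0, 126.0)); /* ~502 */
    printf("octa-core dies/wafer: ~%.0f\n", gross_dies(300.0, 177.0)); /* ~349 */
    return 0;
}
```

By this rough estimate, doubling the core count costs on the order of 30% of the gross dies per wafer (before yield), so the direction of the argument holds even if "50% fewer" overstates it.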
Santoval - Saturday, December 29, 2018 - link
"The funny thing with "shortages" is also that Intel can charge more for their CPUs, so why should they care about producing more?"Er, maybe because increasing prices further makes them even less competitive now that AMD is breathing down their neck (and soon, with Zen 2 based CPUs, AMD will almost certainly pull ahead of them even in single core performance)?
When you have competition you can only increase prices up to a certain degree to compensate for a lower volume of CPUs sold before facing diminishing returns, i.e. a slump in sales. Nvidia can increase prices beyond what they could normally afford because they currently have no competition at the high end - and they are not expected to have any until the end of 2019.
That's not the case with Intel, and if Zen 2 performs as well as it is expected to perform next year they will be forced to cut down prices further.
Santoval - Saturday, December 29, 2018 - link
p.s. While you basically have a point about AMD64 being an expanded 32-to-64 bit ISA rather than a native 64 bit ISA (still, "non native 64 bit" does not equate with "fake 64 bit"), I strongly doubt that there was a 3% decrease in performance from the switch to 64 bit. While the performance increase was not large, there *was* an increase in performance, particularly for programs with a large memory footprint.
Yes, x86 is very bloated while ARM is a much cleaner ISA. I would welcome high performing ARM CPUs for laptops (or even desktops) because that long running x86 monopoly needs to end. We need real choice, and not the "choice" of Qualcomm SoC based laptops forcefully paired with Windows. I don't care about compatibility with Windows programs because I use Ubuntu Linux, which has excellent ARM support, so I would run most apps and programs natively. So, 2019, bring us more powerful ARM SoCs for laptops!
nevcairiel - Saturday, December 29, 2018 - link
The "fake 64-bit" rant is definitely just trolling or a bad case of misinformation.Bigger registers (ie. the thing that actually makes it 64-bit) help certain applications, and they don't help others. This is the same effect on any architecture. If you do math-intensive stuff, 64-bit is going to be significantly faster.
It also makes no difference that x86 registers can be sliced into smaller ones. These days the native register size is 64-bit; that you can also address it as 32-bit or run 32-bit code on it does not really impact 64-bit execution performance. It's not like there are two 32-bit registers that are somehow inefficiently combined, or some nonsense like that.
Certainly the x86 instruction set is full of clutter from the 30 or so years it has existed, and many people would be happy to declutter it, but that's not going to happen.
But at the same time, there is no real replacement architecture available yet either which can provide the same level of performance in all markets, even ignoring software optimizations from decades of development. Maybe ARM will eventually produce high-end desktop and server grade chips, but that's still far off.
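To illustrate the register-width point in the comment above, here is a minimal C sketch (an editorial illustration, not code from anyone in the thread) of what 64-bit integer addition costs when a core has only 32-bit integer registers versus native 64-bit ones.

```c
#include <stdint.h>

/* 64-bit addition emulated with 32-bit halves: two adds plus explicit
 * carry handling, roughly what a 32-bit-only core has to do. */
typedef struct { uint32_t lo, hi; } u64_pair;

static u64_pair add64_via_32bit(u64_pair a, u64_pair b)
{
    u64_pair r;
    r.lo = a.lo + b.lo;
    uint32_t carry = (r.lo < a.lo);  /* carry out of the low word */
    r.hi = a.hi + b.hi + carry;
    return r;
}

/* On a 64-bit core this compiles to a single register-width add. */
static uint64_t add64_native(uint64_t a, uint64_t b)
{
    return a + b;
}
```

Wide-integer work (hashing, crypto, big-number math) benefits most from the native path; code that never touches 64-bit integers sees little gain, which is consistent with both sides of this exchange.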
nevcairiel - Saturday, December 29, 2018 - link
PS: Unless I'm mistaken, ARM 64-bit cores can also run ARM 32-bit code. Clearly the architecture is also a failure, right?
peevee - Thursday, January 3, 2019 - link
"If you do math-intensive stuff, 64-bit is going to be significantly faster."Math "stuff" is done on FP, which were 80 bit since 8087 (end of 1970s).
64-bit integer registers don't help, 64-bit pointers take more memory, cache space, bus throughput etc - all disadvantages. If you don't need more that 4GB of memory, 64-bit is worse than useless. Any performance increase from x64 was because of twice as many architectural registers and some new commands available.
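As a quick editorial sketch of the pointer-footprint argument above (assuming the usual ILP32 vs LP64 data models, nothing specific to these CPUs):

```c
#include <stdio.h>
#include <stdint.h>

/* Pointer-heavy structures grow under a 64-bit (LP64) ABI, which costs
 * cache space and memory bandwidth even if the extra address range is
 * never used. */
struct list_node {
    struct list_node *next;  /* 4 bytes on ILP32, 8 bytes on LP64 */
    int32_t value;           /* 4 bytes either way */
};

int main(void)
{
    /* Typically prints 8 bytes on a 32-bit build and 16 bytes on a
     * 64-bit build (the growth beyond the pointer is alignment padding). */
    printf("sizeof(void*) = %zu, sizeof(struct list_node) = %zu\n",
           sizeof(void *), sizeof(struct list_node));
    return 0;
}
```

This footprint concern is also why the Linux x32 ABI (64-bit registers with 32-bit pointers) was created, though it saw little adoption.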
HStewart - Saturday, December 29, 2018 - link
Have you ever looked up the difference between RISC and CISC? Basically RISC uses simpler instructions - but this means it takes more instructions to do the same thing as CISC instructions. This used to be an advantage for RISC, by being able to execute more at the same time, but CISC designers have found ways to make CISC microcode behave like RISC. So saying that x86 is bloated is wrong - RISC code takes up more storage, but this is hidden by more memory and by the design of phone/tablet apps.
Also, depending on Ubuntu Linux for mainstream computers is crazy. It is more designed for geeks than for common folks. There is no x86 monopoly - that is a false statement - you find an ARM monopoly on tablets and phones; x86 tried that market with phones and it failed.
I don't believe an ARM version of Windows is an option; this is not actually about Windows, but about how inefficient ARM is. Ubuntu Linux is not the solution, as I stated above - it's not made for the general public. One thing I am surprised about is why manufacturers made an ARM version of the Chromebook, which sounds like a better fit.
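As an editorial aside on the RISC-versus-CISC exchange above, the classic difference shows up in something as small as a read-modify-write. A minimal C sketch, with the expected (assumed, compiler-dependent) instruction shapes noted in comments:

```c
#include <stdint.h>

/* Increment one element of an array in memory. */
void bump(int32_t *counts, int i)
{
    /* A load/store RISC ISA (e.g. AArch64) generally needs separate load,
     * add, and store instructions for this statement, while x86-64 allows
     * a single memory-destination add - which the core then cracks into
     * RISC-like micro-ops internally. Exact output depends on the compiler
     * and optimization level. */
    counts[i] += 1;
}
```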
HStewart - Saturday, December 29, 2018 - link
One thing which I know Microsoft hates - it would love everyone to switch to universal apps - is that native Win32/Win64 apps are here to stay. It will probably be a decade before they are gone.
As for Win64, it is a simple architecture extension of Win32 and a natural evolution of x86 CPU design - even if AMD had not stepped in, Intel had it on the drawing board. As for Win128 or Win256, I'm not sure that was needed - but back when Win64 first came out, most people thought that way. Larger instructions do bloat applications because they take up more bytes. Intel is smart in going the AVX2 256-bit and AVX-512 route - using larger instructions where they are needed and not just for ordinary memory access. But who knows, one day memory technology may justify using more than 64 bits of address space.
peevee - Thursday, January 3, 2019 - link
"Fake 64bit"shompa, you are SO delusional. Wow.
rocky12345 - Friday, December 28, 2018 - link
Oh trust me, the iGPU is still in there taking up room and most likely power, but Intel has just disabled it. They would not make a new die for a few models that are basically the same as currently released models minus iGPU support.
FullmetalTitan - Friday, December 28, 2018 - link
Hopefully they learn that lesson before they tape out 7nm parts; if 14nm supply problems are bad now, just wait for the 7nm supply crunch while the yield ramp is still underway.
shompa - Saturday, December 29, 2018 - link
The supply problem is only because Intel has only about four 14nm lines. Intel did not plan for the doubled core count, which means they can produce 50% fewer CPUs - it's that simple if you understand the math. Xeons have no iGPU and instead use that area for more cores. Intel could have released this for consumers 12 years ago, but why would they when they can charge 10K for server chips that cost less to produce than an Nvidia 2080 GPU? (People complain about "expensive" Nvidia GPUs without considering that 500+ mm2 of 12nm actually costs a lot to produce - at least 6 times more than an 8 core Ryzen.)
nevcairiel - Saturday, December 29, 2018 - link
The supply problem is really twofold. Making bigger CPUs is one part of it, but the extremely long lifetime of 14nm is the second part. Usually you would have part of the product stack on the next node and other parts on the last one (like chipsets, or low-end CPUs). But the next node hasn't arrived yet, and the previous-node products have already caught up to the current node - so everything wants to be 14nm, further eating away capacity.
Opencg - Saturday, December 29, 2018 - link
I think the real complaint with Turing is that RTX and DLSS are essentially gimmicks. One trace per pixel is never going to look good, especially when you consider the performance impact. DLSS is essentially the same quality as upscaling to 4K, with worse performance. And both of these take die space that could have been used for real performance. The kicker is that Nvidia HAD to go bigger to fit these gimmicks on the cards, otherwise the performance would be WORSE than previous generations. So all that extra cost of die space could have been avoided. And this comes from NVIDIA. I was predicting this BEFORE the chips launched, because ANYONE with experience in graphics programming, ray tracing, and deep learning could have told you this was the expected result. I used to love Nvidia, but right now I'm hoping for Navi. I'd like to build a new PC without paying a dramatically worse price/performance ratio.
Santoval - Saturday, December 29, 2018 - link
If it's disabled why would it draw power?
shompa - Saturday, December 29, 2018 - link
Remember how Steve Jobs demanded an Intel CPU without a GPU in 2008? Apple refused for years to move from Core 2 CPUs so that their products could have good mainstream GPUs on motherboards. So why hasn't Intel removed the iGPU? It takes 50%-80% of the die area.
1) Remember that the idea was that the iGPU would be x86 Larrabee cores. Imagine if this had come through and we could use those cores to power apps in the OS.
2) AMD fanbois do not understand this: the ASP of a PC is 400 dollars. That is the GPU. That's why AMD gained just 1% market share during 2017.
3) Intel does not care about mainstream desktops. They love to charge 10K for server CPUs after killing off the competition by subsidizing CPUs. Back in 2006 an MP Xeon cost 300 dollars; now it's 1K at least. (Back in 2006 Unix still had over 50% of server revenue - x86 is not everything that exists. And SPARC/HP/Alpha and so on died because they charged 4500 dollars for a CPU while a Xeon cost 300 dollars. Now Intel abuses this non-competition.)
If Intel cared about mainstream/high-end "gaming", imagine a 6-8 core Intel without an iGPU, but with 256MB of eDRAM instead. The Intel 5775C with eDRAM is still way faster per clock today than anything else Intel has. But the problem for Intel is this: how to explain that a mainstream eDRAM CPU is faster than their Xeons while the Xeons cost 50%-500% more.
The best thing for us all is that MSFT recompiles for ARM so we get real 64bit CPUs. The A12 is 40% faster than Intel per clock. (So why is ARM slower in some apps? Well, the big performance jumps we've seen in x86 over the last 12 years are actually AVX256/512, so of course optimized apps will be faster - just like a PowerBook 667MHz crushed the fastest PC in 2002, an AMD 1.5GHz, by 10 times in encoding DVD/MPEG2. Apple had AltiVec, which is why it was insanely faster in media applications.)
Intel is not good for us. Remember that a 4 core high-end CPU costs under 7 dollars to manufacture. The prices we see today are not normal. Even AMD has over 40% margin (compared to evil, greedy Apple's 29%, or good MSFT that has a 95% margin in the Windows/Office division. It's fun being unbiased, like most fanboys).
HStewart - Saturday, December 29, 2018 - link
It's surprising how much truth you state above. But there are a couple of things I am not sure of:
1. Apple demanding Intel remove the iGPU - back in the older days I remember when my desktop lost its GPU and I had to rush to get another one; at least once the iGPU came around you had a backup to work things out.
2. I don't remember Larrabee cards at all - I must have missed that time in history, or just didn't care.
3. ARM vs x86 - well, it depends on the application; if it's just a recompile for apps and such, it probably does not matter - but real desktop work doesn't use basic apps. I would prefer my CS5 version of Photoshop to the subscription versions any day.
4. As for Xeon vs desktop CPUs - my understanding is that, compared to a desktop CPU, the Xeon has better I/O on the system. My dual Xeon 5150 was faster than any computer sold at Best Buy for many years - only when Skylake came out did I see a big difference. If it were not for the stupid audio I/O on my Supermicro, it could still be used today.
You are absolutely correct that Intel does not care about desktops - it is a small percentage of computers compared to mobile.
I believe this is a win-win situation for customers in the long run - Intel is coming back and re-investing to make sure their line is competitive, because they are being attacked by ARM on the low end and AMD on the high end.
But the big question is how much power and how many cores the average customer needs - and how much GPU power you need to run word processors and spreadsheets. I and most people on this website are not average customers. My sister leads a manufacturing company that her husband created and has an Apple iPad 3 and sees no need to upgrade. This is the industry's biggest problem - current laptops and tablets are good enough for most people. Only hardcore gamers need the latest and greatest. I just found out my sister's husband got a new HP laptop with a mobile Xeon for running SolidWorks, and I was curious why - I think possibly they did it because the new SolidWorks supports AVX-512.
PeachNCream - Friday, December 28, 2018 - link
As always, the TDP is too high.
GreenReaper - Friday, December 28, 2018 - link
You could use it less, I guess? Or pick another CPU. These are high-frequency, high-cache parts. They're going to be expensive, power-wise.
If you want lots of lower-powered cores, try an Atom? Don't need so many cores? Celeron or Pentium.
PeachNCream - Saturday, December 29, 2018 - link
That's really the only viable solution these days if you want a decent PC. I'm using a Bay Trail laptop as my primary PC (old HP Stream 11) so the passive cooling and low TDP have totally spoiled me when it comes to getting a good mix of high compute performance and low heat output. The system is not without flaws, but I much prefer using a cooler and quieter system over some hot and loud 95W part in an obsolete desktop form factor. I do keep a couple of other laptops around for heavy lifting. My video production system is a Sandy Bridge 13 inch Dell Latitude which I think has a 35W TDP which is an uncomfortable change from a fully passive Bay Trail even if it is a tad faster and has more RAM.
Peter2k - Saturday, December 29, 2018 - link
Wait until you realize that Intel's TDP is at stock clocks without any boost :-)
95W is nothing.
PeachNCream - Saturday, December 29, 2018 - link
Yeah, that sucks even more about modern Intel chips. The fact that they can hit a peak power draw nearly double the rated 95W TDP is highly disturbing. I need a modern PC to use less power than a standard LED bulb (~9W) under moderate to heavy workloads, rather than 20x more energy. Power around here averages 6.6 US cents per kilowatt-hour, so it adds up quickly when you start demanding 150W at the wall for word processing or fetching e-mail. I can't even imagine gaming on a modern PC these days. That's what Android is for... well, that and handling phone calls.
RSAUser - Monday, December 31, 2018 - link
95W is at stock clocks; if you're just using spreadsheets it will clock lower and draw less power.
On my machines I use frame caps at around 70 fps for 60Hz screens - they draw less power and my fans usually don't turn on.
But that ~6.6c/kWh is not much: a full gaming system draws about 200-220W for most people, so 8 hours of gaming costs roughly 10-12 cents.
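A quick editorial check of that arithmetic, using the 6.6 cents/kWh rate quoted earlier in the thread (an assumed figure, it obviously varies by region):

```c
#include <stdio.h>

/* Back-of-envelope electricity cost for a gaming session. */
int main(void)
{
    const double rate_cents_per_kwh = 6.6;   /* rate quoted above (assumed) */
    const double system_watts = 220.0;       /* full-system gaming draw     */
    const double hours = 8.0;

    double kwh = system_watts * hours / 1000.0;               /* 1.76 kWh  */
    printf("%.0f W for %.0f h = %.2f kWh -> %.1f cents\n",
           system_watts, hours, kwh, kwh * rate_cents_per_kwh); /* ~11.6   */
    return 0;
}
```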
us - Monday, December 31, 2018 - link
oops, replied to the wrong comment.
have you seen what excel does these days as far as cpu usage? it's a cpu hog now.
PeachNCream - Wednesday, January 2, 2019 - link
200W is excessive power demand for something as trivial as killing time. The original Game Boy Advance was released in 2001 or so and ran for 15 hours on two AA batteries that contained roughly 2500 mAh each, so about 7.5 watt-hours, and the end state - an amused person - was the same as can be achieved with a modern desktop PC. Sure, there are considerable differences in the hardware, but with the same goal ultimately reached in 2001 for less than 4% of the energy cost, it makes something that eats as much power as a desktop computer a shameful waste in power consumption, raw material weight, manufacturing need, and cost to the end user. Using modern rechargeable batteries and contemporary processor manufacturing technologies would likely permit a backlit screen and a significant increase in processing power within the GBA's power envelope. Yet here we sit trying to justify products like AMD's Vega, Nvidia's RTX series and the 95W rated TDP of an Intel CPU. It's a disappointment to say the least.
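For reference, a quick editorial check of that comparison, using the battery figures assumed above:

```c
#include <stdio.h>

/* Average power of the original GBA from the figures above:
 * two AA cells, ~2500 mAh each at a nominal 1.5 V, lasting ~15 hours. */
int main(void)
{
    double battery_wh    = 2 * 2.5 * 1.5;        /* ~7.5 Wh stored           */
    double avg_watts     = battery_wh / 15.0;    /* ~0.5 W average draw      */
    double desktop_watts = 200.0;                /* gaming desktop (assumed) */

    printf("GBA average draw: %.2f W, about 1/%.0f of a %.0f W desktop\n",
           avg_watts, desktop_watts / avg_watts, desktop_watts);
    return 0;
}
```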
us - Monday, December 31, 2018 - link
have you seen what excel does these days as far as cpu usage? it's a cpu hog now.
haha imagine if they had KLF do 3am in an advert for it... KLF IS GANA ROCK YALLL. AHUH AHUHdromoxen - Sunday, December 30, 2018 - link
thishttps://www.youtube.com/watch?v=T5BUjl73ZFg&li...
iranterres - Monday, December 31, 2018 - link
Woah, Intel's CPU lineup is a mess at the moment. wtf.
iranterres - Monday, December 31, 2018 - link
Panic mode: let's get more MHz at a "95W TDP" to compete vs Zen 2.
jcc5169 - Wednesday, January 2, 2019 - link
Blah Blah Blah Intel still churning out the same old crap.
fouram33 - Wednesday, January 2, 2019 - link
Ok, simple question: do we need a GPU to boot into the desktop or when doing a fresh install of Windows?
jtd871 - Friday, January 4, 2019 - link
You would need either a discrete GPU or integrated GPU to do a typical Windows install. (Not completely sure, but one could probably figure a way to do an install without a monitor if you are a sysadmin and doing bulk deployment or imaging, but I'm talking about typical home users on single PCs.)