16

Why are chips containing more and more cores? Why not manufacture a larger single-core processor? Is it easier to manufacture? Is it to allow programs to multithread using separate cores?

3
  • The reason is mostly marketing. Most users won't benefit from multi-core, but it's being hyped as much better. It mostly makes sense for servers or power-users.
    – harrymc
    Commented Jun 13, 2010 at 15:00
  • Certainly there is hype, but there is also benefit. Most users these days can benefit from a multi-core (i.e. typically dual-core) because most are using an OS that has multiple threads of execution. But for those still using Windows 95 or earlier I'd agree that multi-core is probably a complete waste of time. Commented Jun 13, 2010 at 15:29
  • at harrymc: "The reason is mostly marketing. Most users won't benefit from multi-core, but it's being hyped as much better. It mostly makes sense for servers or power-users." --- Those greedy snake oil salesmen...
    – Daniel
    Commented Dec 28, 2016 at 17:40

5 Answers 5

26

The trend towards multiple cores is an engineering approach that helps CPU designers avoid the power consumption problems that came with ever-increasing frequency scaling. As CPU speeds rose into the 3-4 GHz range, the amount of electrical power required to go faster started to become prohibitive. The technical reasons for this are complex, but factors like heat dissipation and leakage current (power that simply passes through the circuitry without doing anything useful) both increase rapidly as frequencies rise. While it's certainly possible to build a 6 GHz general-purpose x86 CPU, it hasn't proven economical to do so efficiently. That's why the move to multi-core started, and it's why we will see the trend continue at least until the parallelization issues become insurmountable. For the moment, the trend towards virtualization has helped in the server arena, since it lets us parallelize aggregate workloads efficiently.

As a practical example, the E5640 Xeon (4 cores @ 2.66 GHz) has a power envelope of 95 watts while the L5630 (4 cores @ 2.13 GHz) requires only 40 watts. That's 137% more electrical power for 24% more CPU power, for CPUs that are for the most part feature-compatible. The X5677 pushes the speed up to 3.46 GHz with some more features, but that's only 60% more processing power for 225% more electrical power.
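For the curious, those percentages can be re-derived from the published clock and TDP figures (a quick sanity check; the 225%-more figure implies a 130 W TDP for the X5677 relative to the L5630's 40 W):

```python
# Recompute the "X% more" figures quoted above from TDP (watts)
# and clock speed (GHz). Values taken from the answer itself.
def pct_more(new, old):
    """Percentage increase of new over old, rounded to whole percent."""
    return round((new - old) / old * 100)

print(pct_more(95, 40))      # 138 -> quoted as "137% more power" (E5640 vs L5630)
print(pct_more(2.66, 2.13))  # 25  -> "24% more CPU power"
print(pct_more(130, 40))     # 225 -> "225% more electrical power" (X5677)
print(pct_more(3.46, 2.13))  # 62  -> "60% more processing power"
```

The small discrepancies (137 vs 138, 24 vs 25) are just rounding direction; the ratios themselves match the quoted figures.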

Now compare the X5560 (2.8 GHz, 4 cores, 95 watts) with the newer X5660 (2.8 GHz, 6 cores, 95 watts) and there's 50% extra computing power in the socket (potentially, assuming Amdahl's law is kind to us for now) without requiring any additional electrical power. AMD's 6100-series CPUs see similar gains in aggregate performance over the 2400/8400 series while keeping electrical power consumption flat.
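The Amdahl's law caveat above is worth quantifying, because it's what separates "6 cores" from "6x faster". A minimal sketch, with an illustrative parallel fraction rather than a measured one:

```python
# Amdahl's law: overall speedup on n cores when a fraction p of the
# runtime is parallelizable and the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a workload that is 90% parallel gets nowhere near 6x on 6 cores:
print(round(amdahl_speedup(0.90, 6), 2))  # 4.0
```

Note the ceiling: no matter how many cores you add, the speedup can never exceed 1 / (1 - p), which is why aggregate (many independent jobs) workloads scale so much better than single applications.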

For single-threaded tasks this is a problem, but if your requirement is to deliver large amounts of aggregate CPU power to a distributed processing cluster or a virtualization cluster, then it's a reasonable approach. It means that for most server environments today, scaling out the number of cores in each CPU is a much better approach than trying to build faster/better single-core CPUs.

The trend will continue for a while but there are challenges and continually scaling out the number of cores is not easy (keeping memory bandwidth high enough and managing caches gets much harder as the number of cores grows). That means that the current fairly explosive growth in the number of cores per socket will have to slow down in a couple of generations and we will see some other approach.

4
  • 3
    I can't tell you how many times I've tried to explain this to people who still think a 3.6GHz CPU from 5 years ago is faster than a 2.8GHz CPU with the latest technology. It is infuriating. I hate the megahertz myth.
    – churnd
    Commented Jun 13, 2010 at 22:42
  • Isn't there also a physical limitation due to the speed of light for electrical signals as well?
    – mouche
    Commented Aug 4, 2010 at 21:04
  • 1
    @churnd - But do take into account that they are right in a way, for we must not confuse speed with power (3.6 GHz is undoubtedly faster than 2.8 GHz; what it is not is more powerful). It can make a significant difference for programmers who need, for example, faster single-thread speeds yet are not yet proficient with threading/parallel programming techniques.
    – Rook
    Commented Aug 21, 2010 at 14:23
  • 4
    @ldigas Those programmers care about single-core instruction execution rates, not core clock speeds. Modern CPUs have much higher single-core instruction execution rates, even if the clock speed is lower. Commented Jan 28, 2014 at 2:46
4

The computing power and clock frequency of a single processor core peaked a few years ago; it just isn't easy to create processors more powerful and/or faster than the current ones, so the major CPU manufacturers (Intel, AMD) switched strategy and went multi-core. This of course requires a lot more work from application developers in order to harness the full power of multiple cores: a program running on a single thread simply gets no benefit from a multi-core CPU (although the system as a whole gets an overall bonus, because it doesn't lock up when a single process takes one core to 100% usage).
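To make that "more work from application developers" concrete, here's a minimal sketch of the kind of work-splitting a program has to do before extra cores help at all. The `chunk` helper is hypothetical; `os.cpu_count()` simply reports how many logical cores the OS sees:

```python
# Sketch: dividing one task into per-core chunks. Only the splitting
# logic is shown; each chunk would then go to its own worker/thread.
import os

def chunk(items, n_workers):
    """Split items into n_workers contiguous chunks of near-equal size."""
    k, r = divmod(len(items), n_workers)
    chunks, start = [], 0
    for i in range(n_workers):
        size = k + (1 if i < r else 0)  # first r chunks get one extra item
        chunks.append(items[start:start + size])
        start += size
    return chunks

work = list(range(10))
print(chunk(work, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
print(os.cpu_count())  # how many workers it might make sense to use
```

A program that never does this kind of decomposition runs exactly as fast on eight cores as on one.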

About the physical architecture (multi-core processors instead of multiple single-core ones)... you should ask Intel. But I'm quite sure this has something to do with motherboards with a single CPU socket being a lot easier to design and manufacture than boards with multiple ones.

1
  • 2
    Increasingly, I expect, we're going to be hearing more about Amdahl's law than Moore's law. Commented Jun 12, 2010 at 15:04
4

It was getting too hard to make them usefully faster.

The problem is that you need to be working on a bunch of instructions at once. Current x86 CPUs have 80 or more instructions in flight at once, and it seems that is the limit, as it was hit with the P4 (heck, the Pentium Pro did 40 in 1995). Typical instruction streams are not predictable beyond that (you have to guess branches, memory accesses, etc.), which makes it hard to execute more than a few instructions at once (the 486 did 5, the Pentium did 10, barely).

So while you can make them wider (more functional units to do each piece of the instruction) or longer (deeper pipelines to hide latency), it doesn't seem to do much good. We also seem to have hit a wall with clock speed, and we are still outrunning memory. So splitting the work across many CPUs seems to be a win. Plus, they can share caches.

There is quite a bit more to this, but it boils down to this: conventional programs cannot be run significantly faster on any hardware we know how to design and build.

Now where predictability isn't a problem, for example in many scientific problems and in graphics (they often boil down to multiplying this set of numbers by that set of numbers), this isn't the case; hence the popularity of Intel's IA64 (Itanium) and of GPUs, which just keep getting faster. But they will not help you run Word any better.

1

In order to increase clock speeds, the silicon transistors on the chip need to be able to switch faster. These higher speeds require higher input voltages and semiconductor manufacturing processes that result in greater leakage, both of which increase power consumption and heat output. You eventually reach a point where you cannot increase clock rates any further without requiring excessive amounts of power or using exotic cooling solutions.

To illustrate this problem, I'll compare two modern AMD processors. The AMD FX-9590 is capable of attaining clock speeds of up to 5 GHz out of the box, but operates at core voltages up to 1.912 V, which is extremely high for a 32nm chip, and dissipates an insane 220 watts of heat. The FX-8350, which is based on the same die, runs at a maximum of 4.2 GHz but operates at a maximum of 1.4 V and dissipates 125 watts.
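As a rough cross-check, the textbook CMOS dynamic-power model, P ≈ C·V²·f, predicts a ratio in the same ballpark as those figures: since both chips are the same die, the capacitance term C cancels when you compare them. A sketch, treating the quoted TDPs as a stand-in for actual power draw:

```python
# Dynamic-power ratio between two chips on the same die:
# P1/P2 = (V1/V2)^2 * (f1/f2); the shared capacitance term cancels.
def power_ratio(v1, f1, v2, f2):
    return (v1 / v2) ** 2 * (f1 / f2)

# FX-9590 (1.912 V, 5.0 GHz) vs FX-8350 (1.4 V, 4.2 GHz):
print(round(power_ratio(1.912, 5.0, 1.4, 4.2), 2))  # ~2.2x predicted
# versus the quoted 220 W / 125 W = 1.76x TDP ratio.
```

The model overshoots here, partly because TDP is a thermal design target rather than measured power, but it shows the key point: power grows with the square of voltage, and higher clocks demand higher voltage.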

As a result, instead of trying to increase clocks further, engineers have sought to make chips do more work faster in other ways, including designing them to run multiple processes simultaneously—hence multi-core processors.

0

Moore's law. Basically, processors can't easily be made any faster (clock frequency hit 3 GHz five years ago and never went much beyond it), so they're made more powerful by adding more cores.

1
  • IMHO Moore's law is more of a description than a prediction... sure it held, and it still does, but nothing guarantees it won't break tomorrow. You just can't go to an engineer and tell him "you should be able to do this because Moore's law says it can be done" when the physics won't allow it any more. Commented Jun 23, 2013 at 9:18
