\$\begingroup\$

I would like to build some general background knowledge about what limits engineers in reducing the power and energy consumed by computation.

Please correct me if I am wrong, but for power I understand that electronic consumption has two parts:

$$P=P_d+P_s, \text{ with }P_d \propto f$$

where \$P_d\$ is the dynamic consumption, which scales proportionally with the transistor switching frequency \$f\$ (and hence with the computer's clock speed), and \$P_s\$ is the static consumption (a constant).

Conclusion 1: going faster increases power consumption because of the dynamic consumption.

About energy, I would naïvely expect that the faster the computation goes, the less energy it consumes, because the static contribution vanishes for a fast computer (its constant power is integrated over a shorter run time), while the dynamic contribution stays constant (since \$P_d \propto f\$ and the run time scales as \$1/f\$, their product is fixed). An implicit assumption here is that whenever my computation has finished, I turn the computer off to save energy (see the comments for why I say so). However, the answer seems to be more subtle (see after Eq. (2) in this paper). It says:

We will dissipate more energy if we switch faster

Hence, I am very confused.
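To make the source of my confusion concrete, here is a small numerical sketch of my naive reasoning (the cycle count \$N\$ and the constants \$k\$ and \$P_s\$ are invented purely for illustration):

```python
# Toy model of the naive reasoning above (all numbers are made up for illustration).
# Energy for a task of N clock cycles run at clock frequency f:
#   E(f) = (P_d + P_s) * t, with P_d = k * f (dynamic) and t = N / f,
# so E(f) = k*N + P_s*N/f: the dynamic part is constant, the static part shrinks with f.

N   = 1e9     # clock cycles needed for the task
k   = 1e-10   # dynamic power per unit frequency, W/Hz (equivalently, J per cycle)
P_s = 0.5     # static power, W

for f in (1e8, 1e9, 1e10):       # clock frequency in Hz
    t = N / f                    # run time in seconds
    E_dyn = k * f * t            # = k * N, independent of f
    E_sta = P_s * t              # shrinks as f grows
    print(f"f = {f:.0e} Hz: E_dyn = {E_dyn:.2f} J, E_sta = {E_sta:.2f} J, "
          f"total = {E_dyn + E_sta:.2f} J")
```

In this toy model the total energy only goes down as \$f\$ increases, so I do not see where the extra dissipation from switching faster comes from.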

Basically, I am looking for a pedagogical reference, aimed at someone unfamiliar with the domain, to understand the general challenge of reducing the power and energy consumed by computation. The ideal would be a technical book with a pedagogical introduction to the domain (so that a non-expert like me could understand it, while the fact that it is a technical book would reassure me about its scientific validity). I am also interested in pedagogical answers here (but I would really like to have a good reference).

[edit]: If the consumption can be reduced by making some parameter (for instance a voltage drop) closer to 0, then I want to know what forbids engineers from doing so. My overall question is really about understanding the practical issues engineers face if they want to drastically reduce the consumption.

\$\endgroup\$
  • \$\begingroup\$ About energy, I would naïvely expect that the faster the computer goes, the lower the energy it consumes because the static consumption will vanish for a fast computer - surely if you are doing a single calculation and then turning the computer off, the faster you do it, the less energy (well, not exactly, but there might be an optimum point) you are going to use. But the issue is that we don't turn the computer off after it has finished one calculation. The power consumption is calculated over the same fixed time regardless of how many calculations it performs during it. \$\endgroup\$
    – Eugene Sh.
    Commented Apr 3 at 18:04
  • \$\begingroup\$ @EugeneSh. Why wouldn't we? If my goal is to compute a given task I will turn it off after it has been completed to save energy. This is an implicit assumption (now added in my question). \$\endgroup\$
    – StarBucK
    Commented Apr 3 at 18:08
  • \$\begingroup\$ @StarBucK Then you're probably looking for power consumption for a computation, not a computer, which is a subtly different question \$\endgroup\$
    – pipe
    Commented Apr 3 at 18:11
  • \$\begingroup\$ In this case, the faster your computer is, the higher the dynamic consumption, but the time is shorter, reducing both the dynamic and static consumption. So there is an optimum point where the consumption for the given task is minimal. \$\endgroup\$
    – Eugene Sh.
    Commented Apr 3 at 18:12
  • \$\begingroup\$ @pipe very good point. I updated my question. \$\endgroup\$
    – StarBucK
    Commented Apr 3 at 18:16

3 Answers

\$\begingroup\$

Dynamic consumption is reduced by reducing the number of nodes that change state (a linear effect), by reducing the capacitance of each node (also linear), and especially by reducing the supply voltage (dynamic power scales with the square of the voltage). Capacitance tends to drop as device size drops.

For a given technology, doing a calculation quickly or slowly is not much different in terms of total energy per calculation.

However, because of the way device physics works, decreasing the supply voltage tends to increase quiescent power consumption because leakage is higher.
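As a toy illustration of why the supply voltage cannot simply be pushed toward zero, here is a sketch with invented constants and a simplified delay model (nothing here comes from a real process): lowering the supply reduces the dynamic energy quadratically, but as the supply approaches the threshold voltage the gates slow down, so leakage has more time to act and the leakage energy per operation grows, giving a minimum-energy supply voltage somewhere above threshold.

```python
# Toy model of energy per operation vs. supply voltage (all constants invented).
C      = 1e-15   # switched capacitance per operation, F (illustrative)
I_leak = 1e-7    # leakage current, A (illustrative, exaggerated so the effect is visible)
V_th   = 0.3     # threshold voltage, V
tau0   = 1e-10   # delay scale factor (illustrative)

def energy_per_op(V):
    t_delay = tau0 * V / (V - V_th) ** 1.5  # alpha-power-law-style delay: blows up near V_th
    E_dyn   = C * V ** 2                    # dynamic (switching) energy, falls with V
    E_leak  = I_leak * V * t_delay          # leakage energy while the gate is busy, grows as V drops
    return E_dyn + E_leak

# Sweep the supply voltage and locate the minimum-energy point.
voltages = [0.35 + 0.01 * i for i in range(100)]   # 0.35 V .. 1.34 V
best = min(voltages, key=energy_per_op)
print(f"minimum energy around V = {best:.2f} V, "
      f"E = {energy_per_op(best):.2e} J per operation")
```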

There are limits, both technical and economic, to how small devices can be made and how thin insulators can be. Those limits have historically trended in a fairly predictable manner (Gordon Moore's law). But, as they say, past performance does not guarantee future results.

\$\endgroup\$
  • \$\begingroup\$ Thanks for your answer. But then what prevents someone from making the voltage as small as possible to reduce the consumption as much as possible? What would be the problem with doing so? This is the kind of thing I am looking for: what limits engineers from drastically reducing the consumption. \$\endgroup\$
    – StarBucK
    Commented Apr 3 at 18:54
  • \$\begingroup\$ Enormous amounts of money are expended to make smaller, faster, lower-voltage/power devices that can operate at high speed, are reliable, and can be fabricated in mass quantity at an affordable cost. See, for example, FinFETs and gate-all-around transistors. Each generation generally requires more process steps, more precise equipment, more exotic materials such as hafnium, and ever shorter light wavelengths. \$\endgroup\$ Commented Apr 3 at 19:13
\$\begingroup\$

To add to the existing answers, reliability is really the limiting factor. We seek to use the minimum possible energy per bit or per bitwise operation, but eventually the bit energy becomes so low that it is near the noise floor of our equipment, and so errors occur more frequently. In most applications we prefer to spend more energy and get a reliable result, so the bit energy is kept well above the noise floor. In practice, changing the clock speed has little effect on power consumption since the static power consumption is much lower than the dynamic power.
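As a rough illustration of the reliability point, here is a toy calculation (the Gaussian-noise model and all the numbers are assumptions for illustration, not figures for any real device): if a bit is represented by a voltage swing in the presence of Gaussian noise, the probability of misreading it is roughly the Gaussian tail probability of half the swing divided by the noise RMS, and that probability explodes once the swing is only a few times the noise.

```python
# Toy illustration: bit error probability vs. logic swing for Gaussian noise.
from math import erfc, sqrt

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

sigma = 0.01                          # noise RMS in volts (illustrative)
for dV in (0.5, 0.2, 0.1, 0.05):      # shrinking logic swing in volts
    p_err = q(dV / (2 * sigma))       # decision threshold halfway between the two levels
    print(f"swing {dV:.2f} V -> error probability ~ {p_err:.1e}")
```

Each halving of the swing cuts the switching energy by a factor of four, but once the swing is only a few times the noise the error rate becomes unacceptable, which is why practical bit energies sit well above the noise floor.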

\$\endgroup\$
  • \$\begingroup\$ This is exactly the kind of thing I am looking for (reliability becomes an issue if we look for minimum energy). Do you have a reference for this? It would be fantastic! \$\endgroup\$
    – StarBucK
    Commented Apr 3 at 19:14
  • \$\begingroup\$ Sorry not off the top of my head, it was something I learned at uni in the early 1980s \$\endgroup\$
    – Frog
    Commented Apr 3 at 19:15
\$\begingroup\$

Landauer's Principle sounds like what you're looking for. It's the thermodynamic minimum energy to erase one bit of information.

That's the information-theoretic, ideal minimum energy. When we run a practical computer, its actual implementation, in terms of bias currents, charging capacitive nodes, etc., dissipates many, many orders of magnitude more energy than this, because our computational technology is still pretty stone-age.
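To put a number on "many, many orders of magnitude", here is a back-of-the-envelope comparison (the capacitance and voltage are representative assumptions, not figures from any particular process):

```python
# Landauer limit vs. a representative CMOS node switching energy (illustrative numbers).
from math import log

k_B = 1.380649e-23            # Boltzmann constant, J/K
T   = 300.0                   # room temperature, K
E_landauer = k_B * T * log(2) # ~2.9e-21 J per bit erased

C = 1e-15                     # node capacitance ~1 fF (assumed, illustrative)
V = 0.8                       # supply voltage ~0.8 V (assumed, illustrative)
E_switch = 0.5 * C * V ** 2   # energy to charge the node once

print(f"Landauer limit : {E_landauer:.2e} J per bit erased")
print(f"CMOS switching : {E_switch:.2e} J per node toggle")
print(f"ratio          : ~{E_switch / E_landauer:.0e}")
```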

We can try to optimise our computation for lower energy by reducing voltage swings, and the distances we send information to other cells (the capacitance we have to charge up), and by switching things off when there's nothing to do.

\$\endgroup\$
  • \$\begingroup\$ Thank you for your answer. Unfortunately the Landauer erasure bound is a theoretical minimum which, as you say, is far from reality. I would like to know what limits the reduction of consumption in current electronic devices. \$\endgroup\$
    – StarBucK
    Commented Apr 3 at 18:41
  • \$\begingroup\$ @StarBucK Once you've settled the thermodynamic minimum, all else is implementation inefficiency - rather like the Shannon limit for communication. We are rather closer to that theoretical limit than we are to Landauer's. \$\endgroup\$
    – Neil_UK
    Commented Apr 3 at 19:46
