I would like to build some general background on what limits engineers' ability to reduce the power and energy consumed by computation.
Please correct me if I am wrong, but for power I know that consumption in electronics has two parts:
$$P=P_d+P_s, \text{ with }P_d \propto f$$
Where \$P_d\$ is the dynamic consumption, which scales proportionally with the transistor switching frequency \$f\$ (hence with the computer's clock speed), and \$P_s\$ is the static consumption, which is constant.
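If I understand correctly, the usual first-order CMOS model makes this dependence explicit (here \$\alpha\$ is an activity factor, \$C\$ the switched capacitance and \$V_{dd}\$ the supply voltage; I take this standard formula as an assumption):

$$P_d = \alpha C V_{dd}^2 f$$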
Conclusion 1: going faster increases power consumption because of the dynamic consumption.
About energy, I would naïvely expect that the faster the computation runs, the less energy it consumes: the static contribution vanishes for a fast computer, while the dynamic contribution stays constant given that \$P_d \propto f\$. An implicit assumption here is that I turn the computer off as soon as my computation has finished, to save energy (see comments for why I say so). However, the answer seems more subtle (see after Eq. (2) in this paper). It says:
> We will dissipate more energy if we switch faster
Hence, I am very confused.
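To make my naïve argument explicit: suppose the computation needs a fixed number \$N\$ of clock cycles, so it takes time \$t = N/f\$. Writing \$P_d = k f\$ for some constant \$k\$ (an assumption consistent with the proportionality above), the total energy would be

$$E = (P_d + P_s)\,t = kN + \frac{P_s N}{f}$$

which only decreases as \$f\$ grows, so I do not see where the extra dissipation at higher switching speed comes from.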
Basically, I am looking for a pedagogical reference for someone unfamiliar with the domain, to understand the general challenge of reducing the power and energy consumption of computation. The ideal would be a technical book with a pedagogical introduction to the field (so that a non-expert like me could follow it, while the fact that it is a technical book would reassure me about its scientific validity). I am also interested in pedagogical answers here (but I would really like a good reference).
[edit]: If the consumption can be reduced by pushing some parameter (for instance a supply voltage) closer to 0, then I want to know what prevents engineers from doing so. My overall question is really about understanding the practical issues engineers face when they try to drastically reduce consumption.
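For instance, taking the standard \$P_d = \alpha C V_{dd}^2 f\$ model above at face value, halving the supply voltage should cut dynamic power by a factor of four:

$$\frac{P_d(V_{dd}/2)}{P_d(V_{dd})} = \left(\frac{V_{dd}/2}{V_{dd}}\right)^2 = \frac{1}{4}$$

so I would like to understand what stops engineers from lowering \$V_{dd}\$ indefinitely.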