Embedded Computing for High Performance: Power & Energy Consumption

Over the past couple of months, we’ve been running a series summarizing the excerpts from Embedded Computing for High Performance by João Cardoso, José Gabriel Coutinho, and Pedro Diniz that embedded.com has published. The context for the book (and this series) is the growing importance and ubiquity of embedded computing in the age of the Internet of Things. Earlier posts in this series focused on target architectures and multiprocessor and multicore architectures; core-based architectural enhancement and hardware accelerators; and performance. In this installment, we cover power and energy consumption.

The section on power and energy consumption offers information on techniques for reducing power and energy consumption. It’s pretty much the densest excerpt that we’ve seen to date, so you’re definitely advised to take a look at the entire section. But for a quick summary:

The authors set the stage by providing formulae for total power consumption (the sum of static and dynamic power), for static power consumption, and for dynamic power consumption. They then offer these tips:
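The excerpt itself spells these formulae out; as a reference point, they typically take the shape of the standard CMOS power model sketched below (the book's exact notation may differ):

```latex
P_{\mathrm{total}} = P_{\mathrm{static}} + P_{\mathrm{dynamic}}
\qquad
P_{\mathrm{static}} \approx V_{dd} \cdot I_{\mathrm{leak}}
\qquad
P_{\mathrm{dynamic}} \approx \alpha \cdot C \cdot V_{dd}^{2} \cdot f
```

where \(V_{dd}\) is the supply voltage, \(I_{\mathrm{leak}}\) the leakage current, \(\alpha\) the switching activity, \(C\) the switched capacitance, and \(f\) the clock frequency.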

The static power depends mainly on the area of the IC and can be decreased by disconnecting some parts of the circuit from the supply voltage and/or by reducing the supply voltage.

And, as dynamic power consumption “is proportional to the switching activity on the transistors in the IC”:

One way to reduce dynamic power is to make regions of the IC nonactive and/or to reduce supply voltage and/or clock frequency.
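Both levers fall out of the dynamic-power model directly. Here is a minimal sketch, assuming the standard P_dyn = α·C·V²·f relation; the parameter values are illustrative, not from the book:

```python
# Hedged sketch of the classic CMOS dynamic-power model, P_dyn = alpha * C * V^2 * f.
# All numeric values below are illustrative, not taken from the book.

def dynamic_power(alpha, cap_farads, vdd_volts, freq_hz):
    """Dynamic power of switching logic: activity * capacitance * V^2 * f."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

base = dynamic_power(alpha=0.15, cap_farads=1e-9, vdd_volts=1.2, freq_hz=2.0e9)

# Halving the supply voltage alone cuts dynamic power to one quarter,
# because voltage enters the model squared:
low_v = dynamic_power(alpha=0.15, cap_farads=1e-9, vdd_volts=0.6, freq_hz=2.0e9)

# Making a region of the IC nonactive (e.g., clock gating it) drives its
# switching activity, and hence its dynamic power, toward zero:
gated = dynamic_power(alpha=0.0, cap_farads=1e-9, vdd_volts=1.2, freq_hz=2.0e9)
```

The quadratic voltage term is why voltage scaling is the most effective of the three knobs the authors list.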

They then discuss ACPI (Advanced Configuration and Power Interface), which the operating system or applications use to manage power consumption.
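On Linux, the operating points the OS manages this way are visible through the kernel's cpufreq sysfs interface (on x86, these typically derive from ACPI P-state tables). A hedged sketch, using a sample string in place of the hardware-specific sysfs file:

```python
# Hedged sketch: reading the CPU's available operating frequencies as Linux's
# cpufreq subsystem exposes them via sysfs. The sample string stands in for
# the real file, which only exists (with these values) on actual hardware.
from pathlib import Path

SYSFS = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies")

def parse_frequencies(text):
    """Turn the space-separated kHz list from sysfs into a sorted list of ints (kHz)."""
    return sorted(int(tok) for tok in text.split())

# Illustrative contents, in kHz:
sample = "2400000 1800000 1200000 800000"
text = SYSFS.read_text() if SYSFS.exists() else sample
freqs_khz = parse_frequencies(text)
```

Writing to the sibling `scaling_governor` and `scaling_setspeed` files is how user space requests a policy or frequency; the details vary by kernel configuration and driver.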

Energy consumption is not directly associated with heat but affects battery usage. Thus by saving energy one is also extending the battery life or the length of the interval between battery recharges.
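The power/energy distinction is just integration over time: energy is average power times duration, so a lower average draw stretches a fixed battery budget. A minimal sketch, with illustrative numbers:

```python
# Hedged sketch: energy E = P_avg * t, so battery life scales inversely with
# average power draw. All values below are illustrative.

def energy_joules(avg_power_watts, seconds):
    """Energy consumed at a given average power over a given duration."""
    return avg_power_watts * seconds

def battery_life_hours(capacity_wh, avg_power_watts):
    """Hours a battery of capacity_wh watt-hours lasts at a given average draw."""
    return capacity_wh / avg_power_watts

# A 10 Wh battery at a 2 W average draw lasts 5 hours; cutting the average
# draw to 1.25 W stretches the same charge to 8 hours.
five_h = battery_life_hours(10, 2.0)
eight_h = battery_life_hours(10, 1.25)
```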

The authors next address the topic of dynamic voltage and frequency scaling.

Dynamic voltage and frequency scaling (DVFS) is a technique that aims at reducing dynamic power consumption by dynamically adjusting the voltage and frequency of a CPU. This technique exploits the fact that CPUs have discrete frequency and voltage settings as previously described. These frequency/voltage settings depend on the CPU, and it is common to have ten or fewer clock frequencies available as operating points. Changing the CPU to a frequency-voltage pair (also known as a CPU frequency/voltage state) is accomplished by sequentially stepping up or down through each adjacent pair. It is not common to allow a processor to make transitions between any two nonadjacent frequency/voltage pairs.
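The adjacent-step constraint described above can be sketched as follows; the table of (MHz, volts) operating points is illustrative, not from any real CPU:

```python
# Hedged sketch of the adjacent-step constraint the excerpt describes: a DVFS
# transition between two nonadjacent operating points is performed as a
# sequence of single steps through each pair in between. The operating-point
# table (MHz, volts) is illustrative only.

OPERATING_POINTS = [(800, 0.80), (1200, 0.90), (1800, 1.00), (2400, 1.15)]

def dvfs_transition(cur_idx, target_idx):
    """Return each intermediate frequency/voltage pair, one adjacent step at a time."""
    step = 1 if target_idx > cur_idx else -1
    path = []
    while cur_idx != target_idx:
        cur_idx += step
        path.append(OPERATING_POINTS[cur_idx])
    return path

# Scaling down from the highest point to the lowest visits every pair in between:
down = dvfs_transition(3, 0)
```

Each intermediate state is applied in turn, which is why large frequency swings take longer than single-step adjustments.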

This section contains information on measuring power and energy, and on dark silicon.

We’ll be running one more installment in this series in a few weeks. After that, we’ll take a break, but will likely resume the series as embedded.com brings more excerpts of the book online.