This is the third in a series of blog posts based on a series of articles by Rob Oshana and Mark Kraeling on “Optimizing Embedded Software for Power Efficiency” that ran in Embedded.com in May. In this “episode”, the focus is on “optimizing data flow and memory.” Here’s a quick recap of what’s covered in the article.
The authors point out that "memory-related functionality [with regard to DDR and SRAM memories] can be quite power-hungry." That's the not-so-good news. The good news is that "memory access and data paths can also be optimized to reduce power."
The article tackles DDR first, and gives a pretty thorough primer on it, including a glossary, schematics, and list of the tasks that a DDR controller takes care of. It then gets into data flow optimization.
DDR consumes power in all states, even when CKE (clock enable, the signal that allows the DDR to perform any operations) is disabled, though that draw is minimal. One technique for minimizing DDR power consumption is offered by DDR controllers that provide a power-saving mode which de-asserts the CKE pin, greatly reducing power. In some cases this is called Dynamic Power Management Mode, which can be enabled via the DDR_SDRAM_CFG[DYN_PWR] register. This feature de-asserts CKE whenever no memory refreshes or accesses are scheduled. If the DDR memory has self-refresh capability, this power-saving mode can be prolonged, since refreshes are not required from the DDR controller.
This power-saving mode does impact performance somewhat, since re-asserting CKE when a new access is scheduled adds latency. (Source: Embedded.com)
It then gets into using tools to estimate power consumption, identifying the operations where the power draw is greatest, which is where software engineers need to focus their attention. Several specific optimization tips follow: optimization by timing, with interleaving, with software data organization, with general DDR configuration, and for DDR burst accesses.
They then get into SRAM-related optimization, with discussions of cache data flow optimization, RAM and code size, power consumption and parallelization, and a number of other topics.
As I’ve noted in my summaries of the other Oshana-Kraeling articles, they’re really worth going through fully, as they provide a pretty good mini-course.
The full series of articles is linked here:
Optimizing embedded software for power efficiency: Part 4 – Peripheral and algorithmic optimization
They are all excerpted from Oshana and Kraeling's book, Software Engineering for Embedded Systems.