Reuben George addressed this topic with respect to IoT apps in a recent piece on embedded.com. I’ll summarize Reuben’s article here.
He begins by outlining the three general categories that systems have fallen into: Always On (high-performance, plugged-in apps that draw on an uninterrupted power source); Battery Powered (at the other end of the continuum, these systems run off an on-board battery and prize low power consumption above all); and Battery-Backed Systems (somewhere in the middle, these systems are plugged in but have battery backup in case of a power failure; there's probably a low-end one of these on your bedside table in the form of your clock radio).
But the times, they are a-changin':
…a wide range of applications are migrating wired always-on devices to battery-backed or battery powered mobile versions. This new generation of devices — medical, handheld, consumer, communication, industrial – are all driven by the IoT. They are revolutionizing the way devices function and communicate. For such devices, neither the components designed for high performance nor those for low power can meet design requirements. High performance components have high current consumption and thus drain the battery too quickly. Low-power components are not fast enough to handle the demands of these complex devices. There is a need for devices that are both high performance and low power. This is especially critical for memory since a system is truly only as fast as its slowest component, which in many cases is the external memory. (Source: EE Times)
In response, semiconductor manufacturers came up with high-speed MCUs that can drop into "deep sleep" when top performance isn't needed.
However, just optimizing the controller in IoT devices isn't sufficient to meet their stringent power budgets. In low-power mode, peripherals and memory devices are also expected to save power; the onus of power management has now shifted to the memory devices interfaced to such systems.
Reuben then goes on to talk about why SRAMs, despite their limitations – which he details – are used as a cache between the MCU and storage memory, and makes the argument that the best approach to balancing the trade-off between performance and power is "Fast SRAM with on-chip power management, ensuring both high-performance and low-power."
SRAMs with on-chip power management work much like MCUs with on-chip power management: in addition to active and standby modes of operation, there is a deep sleep mode. This setup allows the SRAM to access data at full speed during its standard mode of operation, while in deep sleep the device performs no functions and can keep current consumption extremely low, on the order of 1,000 times less than the standard standby consumption of Fast SRAMs.
This argument for Fast SRAMs is not surprising, given that Reuben works for Cypress, and Fast SRAM is an area they focus on. But his article is interesting and a good resource for anyone curious about this approach to balancing the power-consumption/performance trade-off.
By the way, Critical Link is no stranger to implementing sleep mode as a means of saving battery life. Our MitySOM-335x module's idle power usage (prior to introducing sleep mode) was 792 mW, which was still too much of a battery drain for a custom handheld medical product that we designed. Once sleep mode was introduced, we were able to bring non-operational power usage all the way down to 92 mW, which obviously had a major positive impact on battery life.