
On the Road Again

Some of you (and some of us) may be taking road trip vacations with the family this summer, but Critical Link is hitting the road, starting in mid-August. Our travels will be taking us across the country, and even across the pond.

We have a fantastic lineup on our calendar:

RTECC (short for Real-Time & Embedded Computing Conference): We’ll be exhibiting at a couple of RTECC conferences:

  • Orange County, California (August 19)
  • San Diego, California (August 21)

And if you don’t think there’s any such thing as a free lunch, think again! RTECC events have no registration fee, and lunch is free. The Orange County conference will be held at the Richard Nixon Library & Museum in Yorba Linda, which folks of a certain age, and history buffs, might get a kick out of. The San Diego event will be at the more prosaic Crowne Plaza Hotel in Mission Valley.

There are several other RTECCs we may be participating in through the rest of the year. We’ll keep you posted.

ARM TechCon, which is going to be held in Santa Clara (at the Convention Center), October 1-3, is a show that we’ll be attending, but not exhibiting at. This is a pretty big event, and is aimed at helping hardware engineers and software developers work together to optimize their future ARM-based embedded products.

No rest for the weary I guess, so back on the East Coast the next week:

TI Tech Days (now TI Tech Summits) are something that we have very much enjoyed participating in. They offer a great opportunity to see TI’s latest embedded processors, and to check out what vendors like Critical Link are doing with them. The exhibits (and that all important lunch) are free.

So far, we know we’ll be exhibiting at these locations: (No info yet on specific venues.)

  • Anaheim, California (October 14)
  • Santa Barbara, California (October 15)

We may be adding others as well.

Come November, we’ll be brushing up on our German.

Vision Expo: We’ll be exhibiting at this event, “the world’s leading machine vision trade fair”, which will be held in Stuttgart, November 4-6, at the Messe Stuttgart. (In case you’re wondering, “messe” is German for “fair.”) Last time he was in Germany, Tom managed to sneak in a tour of the BMW Factory.

Arrow Centralized Training (also known as ACT) will be held in Denver, Colorado, from November 17 – 21, and Critical Link will be exhibiting and training Field Application Engineers on our SOMs and Imaging products. We participated in each of the last 2 ACTs, and can’t wait to see what Arrow has up its sleeve to top the tremendous value received from each of those events.

That’s what’s on the Critical Link calendar for the rest of the year.

If you’re planning on attending any of these events, we’d love to have you stop by and say hello. If you’d like to schedule a time to meet (especially at ARM TechCon, where we’ll be attending but not exhibiting), please drop us a line and we’ll set something up.

 

OnSemi makes some moves

When it comes to image sensors, ON Semiconductor seems to be on a bit of an acquisition tear. (Acquisitions are nothing new for ON Semi, as you can see in this timeline.)

In April, the company announced its intent to acquire our Rochester NY neighbor, Truesense Imaging, a maker of CCD sensors. This deal was aimed at giving ON Semi more of a foothold in the high-end industrial market that Truesense supports. That deal went through in May.

But ON Semi had its eyes on a bigger prize. Last month, they accelerated their push into the industrial market, as well as into automotive (which is where a lot of the imaging action is these days), with the acquisition of Aptina, which is a pretty big player when it comes to CMOS technology.

These two acquisitions are coming right after ON Semi announced the first members of its new Python CMOS sensor family.

So they’re up to an awful lot.

Vision systems are an area that I’m very interested in. (In a post I did in February, I combined my interest in vision systems with my interest in things-automotive, and wrote about driverless cars.) So I am quite intrigued by ON Semi’s moves in this area. Among other things, it gives me an opportunity to put in a plug for Critical Link’s camera families, which are used in scientific imaging and vision applications. We have a rather extensive line of imaging products, which combine vision systems with our MityDSP and MitySOM System on Modules. (You can find more about our imaging products here.)

Although we did use an Aptina sensor in our first generation development kit, we have not incorporated any ON Semi image sensors directly in our products. The sensors we use have primarily been from BAE Fairchild, Hamamatsu, and e2v. However, customers with special sensor needs, or who want to work with sensors other than the ones we’ve chosen, can work with us to create custom cameras that incorporate a custom sensor board with our processor and I/O boards.

What all this activity on ON Semi’s part tells me is that imaging systems are going to be incorporated in more and more applications: industrial, scientific, security, defense, automotive – you name it. As both sensor and processing technology improves, we’ll be seeing a lot more complex, sophisticated applications out there.

It’s Official: Critical Link is ISO 9001:2008 certified

It’s official.

In late June, we received the official certificate that lets the world know that we’re ISO 9001:2008 certified.

What does this all mean?

First off, the International Organization for Standardization, a.k.a. ISO (even though that’s not the acronym for the organization in English, German, or French…):

…is the world’s largest developer of voluntary International Standards.

International Standards make things work. They give world-class specifications for products, services and good practice, to ensure quality, safety and efficiency. And because they are developed through global consensus, they help to break down barriers to international trade. (Source: ISO)

The standards that we chose to adopt are those set down in ISO 9001:2008, which establishes the criteria for an organization’s quality management system. To follow ISO 9001:2008 standards, you need to put into practice eight principles (shown below with our shorthand definitions):

  • Customer focus (all about meeting your customers’ needs – and exceeding their expectations)
  • Leadership (i.e., the organization’s leaders set and communicate clear objectives)
  • Involvement of people (employees at all levels are involved in making the organization succeed)
  • Process approach (a systematic – and measurable – way to get things done)
  • System approach to management (making sure that systems are in place, and integrated)
  • Continual improvement (always looking for ways to better the organization)
  • Factual approach to decision making (making sure that your information is correct, and acting on it)
  • Mutually beneficial supplier relationships (working with the best – and making sure the relationships are built to last)

Critical Link has always been a quality-focused organization, but we decided to formalize our approach, and adopt the ISO standards, for a number of reasons.

One, we really do believe in continual improvement, and adhering to the ISO standards makes us more improvement-conscious.

Two, these are international standards, and, as our customer base becomes more global, and our customers’ products more international, we felt that adopting an internationally established and recognized approach to quality was important.

Our quality focus is, of course, the most important driver here. The applications that our customers bring to market perform essential and complex tasks, and must be built to last. They take care of extremely critical, often life-and-death, functions in industrial settings like transportation, defense, medical, security, scientific, and manufacturing. They’re not throwaway apps that get tossed when the next best thing comes along.

Another thing that appealed to us is the focus on process, systems, and fact-based decision-making. I don’t know how other professions approach these things, but for engineers, it’s heaven!

It’s not enough to just follow the rules and declare yourself certified.

You need to get an objective, third party organization to conduct an audit and assert that you are, in fact, following the ISO standards.

For our certification, we chose SRI to conduct the audit.

So, as of late June, it’s official: Critical Link is ISO 9001:2008 certified, one of the few SOM manufacturers to have achieved this certification.

To say that I am immensely proud of the Critical Link team would be an understatement.

“Optimizing Embedded Software for Power Efficiency,” Part Four

This is my fourth and final post in a series of blog posts based on a series of articles by Rob Oshana and Mark Kraeling on “Optimizing Embedded Software for Power Efficiency” that ran in Embedded.com in May.  The focus of Part Four moves beyond a discussion of power utilization with respect to memory access to a discussion of peripheral and algorithmic optimization. For peripherals – for which the main communication forms “for embedded processors include DMA, SRIO, Ethernet, PCI Express, and RF antenna interfaces” –  the considerations are “burst size, speed grade, transfer width and general communication modes.”

 Although each protocol is different for the I/O peripherals and the internal DMA, they all share the fact that they are used to read/write data. As such, one basic goal is to maximize the throughput while the peripheral is active in order to maximize efficiency and the time the peripheral/device can be in a low-power state, thus minimizing the active clock times.

The most basic way to do this is to increase transfer/burst size. For DMA, the programmer has control over burst size and transfer size in addition to the start/end address (and can follow the alignment and memory accessing rules we discussed in earlier subsections of data path optimization). Using the DMA, the programmer can decide not only the alignment, but also the transfer “shape”, for lack of a better word. What this means is that using the DMA, the programmer can transfer blocks in the form of two-dimensional, three-dimensional, and four-dimensional data chunks, thus transferring data types specific to specific applications on the alignment chosen by the programmer without spending cycles transferring unnecessary data. (Source: Embedded.com)

This section is followed by a look at “whether the core should move data from internal core memory or whether a DMA should be utilized in order to save power.” Other peripherals also come under discussion, with a good-sized section on I/O peripherals. There’s then a bit about polling – a no-no in terms of efficiency.
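Going back to the “transfer shape” idea in the excerpt above, here’s a minimal sketch of what a two-dimensional DMA transfer might look like. The descriptor fields and the driver call are hypothetical – every DMA controller defines its own interface – but the underlying idea is the one the authors describe: specify the block by line length, line count, and stride, and the DMA moves only the data you actually want.

```c
#include <stdint.h>

/* Hypothetical 2D DMA descriptor: field names and the dma_submit() call
 * are illustrative only -- every DMA controller defines its own interface. */
typedef struct {
    uint32_t src_addr;    /* source start address (obeying alignment rules)   */
    uint32_t dst_addr;    /* destination start address                        */
    uint16_t line_bytes;  /* bytes to move per line (inner dimension)         */
    uint16_t line_count;  /* number of lines (outer dimension)                */
    int32_t  src_stride;  /* bytes between the start of consecutive src lines */
    int32_t  dst_stride;  /* bytes between the start of consecutive dst lines */
    uint8_t  burst_size;  /* largest burst the controller/memory will allow   */
} dma2d_desc_t;

extern void dma_submit(const dma2d_desc_t *desc);  /* hypothetical driver call */

/* Pull a 64x64-pixel, 16-bit region out of a 640-pixel-wide frame buffer
 * without ever touching the other 576 pixels on each row. */
void copy_roi(uint32_t frame_addr, uint32_t roi_addr)
{
    dma2d_desc_t d = {
        .src_addr   = frame_addr,
        .dst_addr   = roi_addr,
        .line_bytes = 64 * 2,     /* 64 pixels x 2 bytes                    */
        .line_count = 64,
        .src_stride = 640 * 2,    /* skip ahead to the same column next row */
        .dst_stride = 64 * 2,     /* pack the destination contiguously      */
        .burst_size = 16,         /* long bursts = fewer, fuller transfers  */
    };
    dma_submit(&d);
}
```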

The article (and the series) concludes with a discussion on algorithmic power optimization which, according to Oshana and Kraeling, gives you the least power-savings bang for the work-required buck.

Algorithmic optimization includes optimization at the core application level, code structuring, data structuring (in some cases, this could be considered as data path optimization), data manipulation, and optimizing instruction selection.

There’s some info on the software pipelining technique (including code snippets), eliminating recursive procedure calls, and some advice on reducing accuracy. (Yes, you heard right: perfection can be the enemy of good.) Too much accuracy can overcomplicate things and suck up more cycles, without gaining you much of anything with respect to true precision. (Sort of like when kids carry out a division problem to a dozen decimal places…)
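The “reducing accuracy” advice is easy to picture. As a generic illustration (mine, not code from the article): if the rest of the signal chain only needs a few significant digits, dropping from double precision to Q15 fixed point trades accuracy you weren’t using for cycles – and cycles are power, especially on cores without a double-precision FPU.

```c
#include <stdint.h>

/* Double-precision version: on a core with only a single-precision FPU,
 * every operation may be emulated in software, burning cycles (and power). */
double scale_sample_f64(double sample, double gain)
{
    return sample * gain;
}

/* Same operation in Q15 fixed point: one integer multiply and a shift.
 * Accuracy drops to roughly 4-5 significant digits, which is often all the
 * downstream algorithm can make use of anyway. */
int16_t scale_sample_q15(int16_t sample, int16_t gain_q15)
{
    return (int16_t)(((int32_t)sample * gain_q15) >> 15);
}
```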

Overall, the Oshana-Kraeling series isn’t exactly beach reading, but I found it to be quite instructive.

 

—————————————————————————————————

The full series of articles is linked here:
Optimizing embedded software for power efficiency: Part 1 – Measuring power
Optimizing embedded software for power efficiency: Part 2 – Minimizing hardware power
Optimizing embedded software for power efficiency: Part 3 – Optimizing data flow and memory
Optimizing embedded software for power efficiency: Part 4 – Peripheral and algorithmic optimization

They are all excerpted from Oshana and Kraeling’s book, Software Engineering for Embedded Systems.

Why customers choose Critical Link (and not, say, BeagleBone)

Given the nature of the applications that our customers develop, we don’t typically run into competitive situations where Critical Link System on Modules (SoMs) are going head to head with products like BeagleBone. For those not familiar with BeagleBone, it’s open-source hardware, a single-board computer based on a TI SoC. For the most part, BeagleBone has been aimed at the academic market, tinkerers, prototypers – folks who aren’t bringing out solutions that actually make it into production. This is in pretty sharp contrast to our typical customer, who’s bringing a complex, high-value application to market. And our customers are typically in an industry – medical and scientific instrumentation, defense, transportation, manufacturing – where quality and product stability are paramount.

Nonetheless, we’re always interested in what BeagleBone’s up to, and a recent blog post by Jason Kridner, which discusses the production problems that BeagleBone’s run into with its relatively new BeagleBone Black, got me thinking about why our customers choose Critical Link.

Our SoMs are built for applications that matter, and we provide a very high level of support. So when our customers have problems, they bring them to our technical staff. We provide 100% support at the board level, and quite a bit of help in other areas as well, such as processor and Linux questions, even though we aren’t necessarily obligated to.

We also pay a lot of attention to product longevity. Our customers aren’t building products that get redesigned every year. (I can’t imagine being a cell phone manufacturer.) We understand that our customers’ products are going to be in production for a long time – typically 10 to 15 years. We’ll support them throughout their product lifecycle, and won’t be forcing unwanted hardware revisions or software updates on them.

It’s also interesting to think about what happens when the commercial needs of a company collide with the demands of a more mass market – e.g., those enrolled in a Georgia Tech MOOC course on controlling robots. That Georgia Tech MOOC is one of the reasons that BeagleBone has been grappling with a production capacity problem. Imagine wrapping your product around a product that all of a sudden can’t be bought!

I am by no means a BeagleBone basher. I’m all for anything that gets engineering students and tinkerers building stuff. I’m really happy that BeagleBone has contributed to the success of the AM335x for TI. I also like that so many folks are going hands-on, playing with the hardware and not just fooling around with designing UIs. (Not that I’m bashing UI designers here, either.)

And I give BeagleBone plenty of credit for being so candid about the problems they’ve been having.

What’s interesting here is seeing the serious challenges that come with ramping up to support a production environment – logistics management, lifecycle product support, availability, product stability, and, maybe the most important of all, quality. Gearing up for a less than mass market may not be as simple a matter as one might think.

 

Optimizing embedded software for power efficiency via data flow and memory

This is the third in a series of blog posts based on a series of articles by Rob Oshana and Mark Kraeling on “Optimizing Embedded Software for Power Efficiency” that ran in Embedded.com in May.  In this “episode”, the focus is on “optimizing data flow and memory.” Here’s a quick recap of what’s covered in the article.

The authors point out that “memory-related functionality [with regard to DDR and SRAM memories] can be quite power-hungry.” That’s the not so good news. The good news is that “memory access and data paths can also be optimized to reduce power.”

The article tackles DDR first, and gives a pretty thorough primer on it, including a glossary, schematics, and list of the tasks that a DDR controller takes care of.  It then gets into data flow optimization.

 DDR consumes power in all states, even when the CKE (clock enable — enabling the DDR to perform any operations) is disabled, though this is minimal. One technique to minimize DDR power consumption is made available by some DDR controllers which have a power saving mode that de-asserts the CKE pin — greatly reducing power. In some cases, this is called Dynamic Power Management Mode, which can be enabled via the DDR_SDRAM_CFG[DYN_PWR] register. This feature will de-assert CKE when no memory refreshes or accesses are scheduled. If the DDR memory has self-refresh capabilities, then this power-saving mode can be prolonged as refreshes are not required from the DDR controller.

This power-saving mode does impact performance to some extent, as enabling CKE when a new access is scheduled adds a latency delay. (Source: Embedded.com)
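As a purely illustrative sketch of what enabling that mode can look like in code: the register name comes from the excerpt (it’s a Freescale DDR controller register), but the base address and bit position below are placeholders, so check the reference manual for your particular controller before borrowing anything.

```c
#include <stdint.h>

/* Placeholder address and bit position -- NOT taken from a real memory map. */
#define DDR_CTRL_BASE         0xFF700000u            /* hypothetical controller base  */
#define DDR_SDRAM_CFG         (DDR_CTRL_BASE + 0x110u)
#define DDR_SDRAM_CFG_DYN_PWR (1u << 21)             /* hypothetical DYN_PWR bit      */

static inline uint32_t reg_read32(uint32_t addr)
{
    return *(volatile uint32_t *)addr;
}

static inline void reg_write32(uint32_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Enable Dynamic Power Management Mode: the controller will de-assert CKE
 * whenever no refreshes or accesses are pending, at the cost of a small
 * latency penalty when the next access has to re-assert it. */
void ddr_enable_dynamic_power(void)
{
    reg_write32(DDR_SDRAM_CFG, reg_read32(DDR_SDRAM_CFG) | DDR_SDRAM_CFG_DYN_PWR);
}
```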

It then gets into using tools to estimate power consumption, identifying those operations where the power draw is greatest – operations where software engineers need to focus their attention. There are a number of specific optimization tips: optimizing by timing, with interleaving, with software data organization, with general DDR configuration, and for DDR burst accesses.
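The software data organization tip is worth a small, generic example (mine, not the authors’): if a loop only streams through one field, keeping that field in its own contiguous array lets the DDR controller satisfy the loop with long sequential bursts instead of scattered partial ones.

```c
#include <stdint.h>
#define N 4096

/* Array-of-structs: summing just the 'sample' field strides past the other
 * fields, so each burst drags in bytes the loop never uses. */
struct reading_aos { uint32_t timestamp; int16_t sample; uint16_t flags; };

/* Struct-of-arrays: the samples are contiguous, so the same loop turns into
 * long, sequential DDR bursts (and is friendlier to the cache as well). */
struct readings_soa {
    uint32_t timestamp[N];
    int16_t  sample[N];
    uint16_t flags[N];
};

int32_t sum_samples(const struct readings_soa *r)
{
    int32_t acc = 0;
    for (int i = 0; i < N; i++)
        acc += r->sample[i];
    return acc;
}
```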

They then get into SRAM-related optimization, with discussions of cache data flow optimization, RAM and code size, power consumption and parallelization, and a number of other topics.

As I’ve noted in my summaries of the other Oshana-Kraeling articles, they’re really worth going through fully, as they provide a pretty good mini-course.

___________________________________________________________________________________

The full series of articles is linked here:

Optimizing embedded software for power efficiency: Part 1 – Measuring power

Optimizing embedded software for power efficiency: Part 2 – Minimizing hardware power

Optimizing embedded software for power efficiency: Part 3 – Optimizing data flow and memory

Optimizing embedded software for power efficiency: Part 4 – Peripheral and algorithmic optimization
They are all excerpted from Oshana and Kraeling’s book, Software Engineering for Embedded Systems.

All Fired Up

Although I’m not going to run out and buy one – I’m happy (appy?) enough with my LG G2 (I pitched my iPhone 5 two weeks ago) – like everyone else who loves technology, I was certainly curious about Amazon’s entry into the smartphone market. (And, admittedly, I wouldn’t turn it away if someone left a Fire Phone on my doorstep. It looks like there are a lot of goodies in there, especially on the camera side.)

Wired has as good a write-up as any other one I’ve seen, so I’ll draw on that one here.

A key part of this allure are the four cameras tucked into the front corners of the phone’s 4.7-inch screen…Using the camera’s face-tracking input, you can look around onscreen objects, even peer behind them. It’s not about popping-out-of-the-screen 3-D, but about infusing a sense of depth and realism into a bunch of flat pixels. Your phone becomes a little diorama box, with stunning effects for 3-D maps, games, and homescreen wallpaper.

That dynamic perspective is also meant to make using the Fire Phone with one hand a lot easier. You can tilt the phone to scroll through news articles or books, as well as navigate through screens. (Source: Wired)

What seems to be the most important capability the cameras provide is support for the phone’s Firefly scanner feature. Firefly lets you point at an object, a UPC or QR code, etc., and “create a queue of things to identify, save on the device, or buy on Amazon.”

I suspect that this convenient “buy on Amazon” attribute is the main reason that Jeff Bezos was so keen on developing a smart phone of his own. Not that anyone needs Firefly to figure out how to buy stuff on Amazon…

Naturally, I was interested in the Fire Phone’s specs.

The Fire Phone is an Android device, which uses Amazon’s heavily modified version of the Android OS. (As a recent convert from the iPhone, I have found I appreciate Android more than iOS.) The system-on-chip is a 2.2GHz Qualcomm Snapdragon 800 with 2GB RAM, so this is a plenty-powerful device.

…It also has a 2,400 mAh battery that Amazon says should last “all day,” and a 13-megapixel main camera with optical stabilization and an F2.0 lens.

The question that everyone seems to be asking is whether the droves of developers who’ve created apps for the iPhone and for standard Android smartphones will be just as enthusiastic about creating apps for the Fire Phone.

The Fire Phone will be released in late July.

As I said, I won’t be trading my LG G2 phone in anytime soon, and I’m really not all that interested in a smartphone which has shopping facilitation as one of its prime selling features. But the camera aspects are very interesting, and if someone were to put a Fire Phone in my hands, I wouldn’t throw it back.

Technology Down on the Farm

As anyone who knows me – or reads this blog – can tell you, I’m extremely interested in innovative uses of technology, whether it’s things I use directly – like the Nest thermostat – or things I may need to take advantage of someday – like sensor-based medical technology – or things that only peripherally impact my life – like the automated milking tech I posted on recently.

My automated milking post aside, I’m really not a farm boy.  Nonetheless, a recent article in The Economist on agricultural technology caught my interest.

The article focused on Monsanto’s “prescriptive-planting system”:

…FieldScripts, had its first trials last year and is now on sale in four American states. Its story begins in 2006 with a Silicon Valley startup, the Climate Corporation. Set up by two former Google employees, it used remote sensing and other cartographic techniques to map every field in America (all 25m of them) and superimpose on that all the climate information that it could find. By 2010 its database contained 150 billion soil observations and 10 trillion weather-simulation points.  (Source: The Economist)

I recall reading about the Climate Corporation, and their plans to use all that big data to sell crop insurance. But once Monsanto acquired the company last year, the plan became to combine that climate data with Monsanto’s huge store of data on the yields of its hundreds of thousands of seed varieties.

FieldScripts uses all these data to run machines made by Precision Planting, a company Monsanto bought in 2012, which makes seed drills and other devices pulled along behind tractors. Planters have changed radically since they were simple boxes that pushed seeds into the soil at fixed intervals. Some now steer themselves using GPS. Monsanto’s, loaded with data, can plant a field with different varieties at different depths and spacings, varying all this according to the weather. It is as if a farmer can know each of his plants by name.

Other companies are getting into the act: DuPont is working with John Deere, among others, and the initial data are showing that crop yields increase dramatically for those using the system.

Some farmers, however, are resisting. They’re concerned that the data could be misused (e.g., leaked/sold to rival farmers). They also fear that the science will take some of the art out of farming, and that farmers will no longer be calling on their “core competency”: the years of experience and feel for the land that inform their decision-making.

To some (small) degree, I can sympathize with the concern that the machines are taking over.  For example, I enjoy the skill that goes into driving, and I’m not sure that I’m enamored of the prospect of self-driving cars. But I also know that having access to smart technology opens up opportunities to discover new things. In the good old days, all I could do with my thermostat was turn it up or down. Now, I can program my Nest to take into account whether anyone’s at home; I can handle a sudden temperature swing remotely; and do a lot of other cool and interesting things.

I think that the farmers will find the same thing: using technology will make them better (and maybe even smarter) farmers.

“Optimizing Embedded Software for Power Efficiency”, Take Two

This is the second in a series of blog posts based on a series of articles by Rob Oshana and Mark Kraeling on “Optimizing Embedded Software for Power Efficiency” that ran in Embedded.com in May.

Their second article focused on optimizing “clock control and power features provided in the microprocessor peripheral circuits,” and was primarily oriented towards our friend the DSP, as used in two different types of apps with different power-consumption profiles: an MP3 player and a cell phone.

For both of these power profiles, software-enabled low-power modes (modes/features/ controls) are used to save power, and the question for the programmer is how to use them efficiently… The most common modes available consist of power gating, clock gating, voltage scaling, and clock scaling. (Source: Embedded.com)

Power gating “uses a current switch to cut off a circuit from its power supply rails during standby mode, to eliminate static leakage when the circuit is not in use.” Power gating can sometimes be used to save power by shutting off unused peripherals.

Clock gating:

…shuts down clocks to a circuit or portion of a clock tree in a device. As dynamic power is consumed during state change triggered by clock toggling…clock gating enables the programmer to cut dynamic power through the use of a single (or a few) instructions. Clocking of a processor core like a DSP is generally separated into trees stemming from a main clock PLL into various clock domains as required by design for core, memories, and peripherals, and DSPs generally enable levels of clock gating in order to customize a power-saving solution.
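Here’s a minimal sketch of what clock gating looks like from the programmer’s seat. The register names and bit positions are made up, but most SoCs expose something along these lines: a per-peripheral clock-enable bit you clear when the block is idle and set again just before you need it.

```c
#include <stdint.h>

/* Hypothetical clock-control register: one enable bit per peripheral.
 * Names, addresses, and bit assignments are illustrative only. */
#define CLK_GATE_REG   (*(volatile uint32_t *)0x4800A000u)
#define CLK_EN_UART1   (1u << 3)
#define CLK_EN_SPI0    (1u << 5)

void uart1_suspend(void)
{
    /* Flush any pending traffic first, then stop the clock so the block's
     * flip-flops stop toggling and drawing dynamic power. */
    CLK_GATE_REG &= ~CLK_EN_UART1;
}

void uart1_resume(void)
{
    CLK_GATE_REG |= CLK_EN_UART1;
    /* Some blocks need a few clock cycles before the first register access. */
}
```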

The article then goes into detail on low-power modes in a couple of different DSPs, the Freescale MSC815x and the Texas Instruments C6000. (We use the C6000 DSP in our MityDSP-L138 SoM, which features TI’s OMAP-L138.)

Next, the article gets into clock and voltage control/scaling. One method of voltage scaling is using a voltage regulator module to monitor and update voltage ID parameters.  There’s a lot of detail on the pros and cons of clock scaling.
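On Linux-based systems (which is where many of our SoM customers live), there’s also an OS-level handle on clock scaling worth mentioning as an aside – it isn’t from the article. The kernel’s cpufreq interface exposes frequency control through sysfs; assuming your kernel has cpufreq support, the userspace governor, and you’re running as root, a crude sketch looks like this (the 300 MHz figure is just an example, not a recommendation for any particular part):

```c
#include <stdio.h>

/* Illustrative only: requires a kernel with cpufreq support and the
 * "userspace" governor; the paths are the standard sysfs locations, but the
 * available frequencies are entirely platform-specific. */
static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(value, f);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Hand frequency control to userspace, then drop the core clock while
     * the application is in a low-activity phase. */
    write_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "userspace");
    write_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", "300000");
    return 0;
}
```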

Admittedly, I’m not doing justice to the information – including very detailed examples – that’s provided in these articles. Just trying to give a sense of what’s covered – and to encourage those who want a quick course in taking a software approach to power efficiency to give them all a full read.
—————————————————————————————————————–

The full series of articles is linked here:

Optimizing embedded software for power efficiency: Part 1 – Measuring power

Optimizing embedded software for power efficiency: Part 2 – Minimizing hardware power

Optimizing embedded software for power efficiency: Part 3 – Optimizing data flow and memory

Optimizing embedded software for power efficiency: Part 4 – Peripheral and algorithmic optimization

They are all excerpted from Oshana and Kraeling’s book, Software Engineering for Embedded Systems.

 

Measuring power consumption: establishing the baseline on which to evaluate code optimization

In May, Embedded.com ran a series of articles by Rob Oshana of Freescale Semiconductor and Mark Kraeling of GE on managing embedded software design’s power requirements. Taken as a whole, the articles are a mini-course in themselves, but over the next month or so, I’ll be providing a short synopsis of each of them.

I have to say, in terms of piquing my interest, the authors had me at hello. (Or, at any rate, at the intro.)

One of the most important considerations in the product lifecycle of an embedded project is to understand and optimize the power consumption of the device. Power consumption is highly visible for hand-held devices which require battery power to be able to guarantee certain minimum usage/idle times between recharging. Other embedded applications, such as medical equipment, test, measurement, media, and wireless base stations, are very sensitive to power as well — due to the need to manage the heat dissipation of increasingly powerful processors, power supply cost, and energy consumption cost — so the fact is that power consumption cannot be overlooked. (Source:  Embedded.com)

The point of the series is then introduced, which is that, although we generally think of power optimization as being the responsibility of hardware engineering, there’s plenty that the software side of the house can do, too.

The focus of the first article is on the importance of measuring power. It starts out with an explanation of the basics of power consumption, and how you need to factor in the application, the frequency, the power consumption, and the process technology. It points out that there’s nothing that software can do about static power consumption, but that it can have an impact on dynamic power consumption, by controlling clocks (which are responsible for consuming the majority of dynamic device power). In order to establish a foundation for using software to optimize power efficiency, you first need to go about measuring power.
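As a bit of standard CMOS background (mine, not the article’s): dynamic power scales roughly as P ≈ α·C·V²·f, which is exactly why clock control is the lever software has its hands on. A back-of-the-envelope sketch, with made-up numbers:

```c
#include <stdio.h>

/* Back-of-the-envelope dynamic power estimate, P = alpha * C * V^2 * f.
 * The numbers below are invented for illustration, not from any datasheet. */
int main(void)
{
    double alpha = 0.2;      /* activity factor: fraction of nodes toggling   */
    double c     = 1.0e-9;   /* switched capacitance, farads (illustrative)   */
    double v     = 1.2;      /* core voltage, volts                           */
    double f     = 456e6;    /* core clock, hertz                             */

    double p_dyn = alpha * c * v * v * f;
    printf("Estimated dynamic power: %.3f W\n", p_dyn);  /* ~0.131 W here */
    return 0;
}
```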

There are a number of ways to measure power, starting with the old fashioned way: using an ammeter (kind of the slide rule of power measurement) on the core power supply. There are other methods available, as well:

 …some embedded processors provide internal measurement capabilities; processor manufacturers may also provide “power calculators” which give some power information; there are a number of power supply controller ICs which provide different forms of power measurement capabilities; some power supply controllers called VRMs (voltage regulator modules) have these capabilities internal to them to be read over peripheral interfaces.

Once you’ve profiled your app’s power consumption for each section of code, you’ll have the baseline on which to determine how effective your code optimization is.
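Here’s a hedged sketch of what that baselining can look like in practice. The measurement call is a stand-in for whatever your board actually provides – an ammeter on the core rail, VRM telemetry read over I2C, or an on-chip monitor:

```c
#include <stdio.h>

/* read_core_power_mw() is hypothetical: substitute whatever measurement path
 * your hardware exposes (instrumented core rail, VRM/PMBus telemetry, or an
 * on-chip power monitor). run_fir_filter() stands in for the code section
 * being profiled. */
extern double read_core_power_mw(void);
extern void run_fir_filter(void);

int main(void)
{
    double idle_mw = read_core_power_mw();   /* baseline with the core idle */

    double before = read_core_power_mw();
    run_fir_filter();
    double after = read_core_power_mw();

    /* Averaging two samples is crude; in practice you'd sample the rail
     * continuously while the section runs and integrate. */
    double active_mw = (before + after) / 2.0;
    printf("FIR section draws ~%.1f mW above idle\n", active_mw - idle_mw);
    return 0;
}
```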

I’m not going to get into the details here, but the articles are definitely worth reading if you’re on the embedded software side of the house.

———————————————————————————————————-

Just in case you’re looking for some light reading for an early summer’s day, here are the links to all four articles in the series:

Optimizing embedded software for power efficiency: Part 1 – Measuring power

Optimizing embedded software for power efficiency: Part 2 – Minimizing hardware power

Optimizing embedded software for power efficiency: Part 3 – Optimizing data flow and memory

Optimizing embedded software for power efficiency: Part 4 – Peripheral and algorithmic optimization

They are all excerpted from Oshana and Kraeling’s book, Software Engineering for Embedded Systems.

Again, I will be doing short posts on the other three articles over the next month or so. (Part 2 will be posted next week.)