
One step at a time: how Jawbone counts steps

For my fitness tracker, I have a Jawbone UP3, which bills itself as “the most advanced tracker you can buy.” I like it, but I’m still waiting on some of the more advanced functions that Jawbone promised when it launched the device back at the end of last year.

Anyway, I was looking at Jawbone’s blog, and they have a couple of recent posts on step-counting, which is one of the core features of pretty much all fitness trackers.

If you don’t give it much thought, step counting should be pretty simple. Foot up, foot down. One step. How difficult can that be to figure out? But we also know that fitness trackers sometimes produce step counts that are wildly inaccurate. And if you do give it some thought, it’s easy to understand why.

Not only do people come in a wide range of heights and weights, and have a wide range of strides, but, as Jeremiah Robison wrote in “Making Step Counting Smarter”:

“…people are surprisingly varied and unique in the way they move. Some walk in sneakers, others in high heels. Some people swing their arms when they walk, others look like they’re carrying 2 suitcases wherever they go. There are commuters who walk, drivers who stroll, parents with baby carriages, and folks who jump, skip, or pedal their bike.”  (Source: Jawbone blog)

Jawbone’s approach factors this in, and they’re constantly refining their product. They have a machine learning system, and they test it on subjects covering a range of heights and weights, stepping on varied surfaces and under varied conditions. They recently put in a fix to accommodate low-weight individuals, whose steps were being undercounted.

In a more technical post on the subject, Stuart Crawford gets into more detail on Jawbone’s machine learning-based step classifier:

“In order to train the step classifier we provide the learning algorithm with enormous numbers of ‘labeled examples’.  A single example is simply a short snippet of accelerometer data, with a label indicating whether that snippet corresponds to one, two, or three steps.  Features of the accelerometer stream are then defined to describe each snippet.” (Source: Jawbone blog)
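
Jawbone doesn’t spell out the exact features or model it uses, but just to make the idea concrete, here’s a toy sketch of the general approach – compute a handful of hand-crafted features over each labeled accelerometer snippet and feed them to an off-the-shelf classifier. The window contents, feature choices, and model below are my own assumptions, not Jawbone’s pipeline.

```python
# Toy snippet-based step classifier (my own sketch, not Jawbone's pipeline).
# Each labeled example is a short window of 3-axis accelerometer samples plus
# a label saying how many steps (e.g., 0-3) the window contains.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def snippet_features(snippet):
    """snippet: (N, 3) array of accelerometer samples -> fixed-length feature vector."""
    mag = np.linalg.norm(snippet, axis=1)            # per-sample acceleration magnitude
    centered = mag - mag.mean()
    zero_crossings = np.sum(np.diff(np.sign(centered)) != 0)   # crude periodicity cue
    return np.array([
        mag.mean(), mag.std(), mag.min(), mag.max(),
        np.abs(np.diff(mag)).mean(),                 # average change between samples
        zero_crossings,
    ])

def train_step_classifier(snippets, labels):
    """snippets: list of (N, 3) arrays; labels: step count for each snippet."""
    X = np.vstack([snippet_features(s) for s in snippets])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

# Later, on new data: steps_in_window = clf.predict([snippet_features(new_snippet)])[0]
```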

The full post is definitely worth a look, especially for someone like me. Machine learning grew out of what used to be called Artificial Intelligence (AI). Apparently those words spooked too many people, so it’s no longer as widely used a term. I studied AI at UMass for my Master’s degree, focusing on what are called neural networks – basically simulations of the neurons in your brain. Each simulated neuron has a set of coefficients that get updated as the machine learns, loosely analogous to how the real brain functions.

And similar to what Jawbone’s doing with the foot, one step at a time.

Embedded Barbie

My daughters are well beyond Barbie age, but a recent article in the NY Times (“Barbie Wants to Get to Know Your Child”) still managed to catch my eye. Barbie, it seems, is getting smart – at least smarter than she was twenty years back when she could utter a few lines like, “Math class is tough!” Thanks to AI, speech recognition, and embedded technology, Hello Barbie – which is the name Mattel has given her – will soon be intelligent, albeit artificially so.

What’s inside Barbie is, among other things, a rechargeable battery in each thigh. In fact, they had to widen the thighs a bit to make room for the batteries. There’s also a “mini-USB charging port…tucked into the small of her back.” And, as the article’s author found when he got a look inside Mattel while the new Barbie was being developed:

“A microphone, concealed inside Barbie’s necklace, could be activated only when a user pushed and held down her belt buckle. Each time, whatever someone said to Barbie would be recorded and transmitted via Wi-Fi to the computer servers of ToyTalk [an AI company]. Speech-recognition software would then convert the audio signal into a text file, which would be analyzed. The correct response would be chosen from thousands of lines scripted by ToyTalk and Mattel writers and pushed to Hello Barbie for playback — all in less than a second.” (Source: NY Times)
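
ToyTalk hasn’t published its code, so take this as nothing more than a toy sketch of the round trip the article describes: audio comes in, gets transcribed, and a reply is picked from a pool of scripted lines. The function names, keywords, and scripted lines below are all made up for illustration.

```python
# Toy sketch of the record -> transcribe -> pick-a-scripted-line round trip
# (not ToyTalk's system; transcribe_audio() stands in for a real speech-to-text call).

SCRIPTED_LINES = [
    ("puppy", "I love puppies! Do you have a pet?"),
    ("school", "School can be so much fun. What's your favorite subject?"),
    ("", "That's so interesting! Tell me more."),    # catch-all fallback
]

def transcribe_audio(audio_bytes):
    """Placeholder for a real speech-recognition service; returns a canned transcript."""
    return "guess what, I got a new puppy today"

def choose_response(transcript):
    text = transcript.lower()
    for keyword, line in SCRIPTED_LINES:
        if keyword in text:            # "" matches anything, so the fallback always hits last
            return line

def handle_utterance(audio_bytes):
    transcript = transcribe_audio(audio_bytes)       # audio -> text
    return choose_response(transcript)               # text -> scripted reply, pushed back for playback

print(handle_utterance(b"...raw audio..."))
```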

Apparently, given the form factor that the toy designers were working with, most of what’s on the inside had to be custom built. Critical Link was not asked for any help. It doesn’t sound like they’re actually doing much of any processing within Barbie herself, just some recording and transmitting. If they do want to make Barbie any smarter, we can create a System on Module that’s pretty darned small. (I’m trying to remember just how big Barbie is. I’m sure I stepped on more than one back in the day.)

The processing itself happens on Amazon’s S3, the platform that ToyTalk uses.

“The company’s AI and speech technology are written in C++ with iOS and Android clients built on top. Its desktop authoring software is written in Python and PyQt. The server code is written in Go, and ToyTalk provides a RESTful Web API to build Web-based conversational clients.” (Source: Tech World News)

Even though Hello Barbie isn’t expected to be released until the holiday season, it’s already inspiring some pushback. Much of it centers on the potential for the recording technology to be misused, since it could invade the privacy of the children holding conversations with their dolls. (Which, of course, they’ve done for years. It’s just that most dolls haven’t had all that much to say back.) Pretty much a 21st century problem, I’d say.

In any case, as I mentioned, I’m out of the Barbie fray. Just thought that coming up with a Siri or Cortana for Barbie-lovers is an interesting use of technology.

Table Tracking Technology Comes to Panera

Over the summer, I took my kids out to a weekend breakfast at a local Panera. If you’ve never been to a Panera, they’ve always (in my experience) had a system where, after you placed your order, they handed you a buzzer that lit up, buzzed, and vibrated when your order was ready. You then went up to the counter and picked it up.

Last year, they began introducing table service. Rather than making you fetch your own order, they’re now rolling out technology that lets servers bring it to your table.

They’re deploying an LRS Table Tracker system that includes:

  • The trackers that customers place on their table. These broadcast your location to the gateway.
  • A gateway that connects the trackers to the overall Table Tracker system.
  • A mobile app that employees use.
  • Tags that are placed on each table.

Critical Link, while not part of the LRS solution, does have clients that use our SOMs to implement base/control stations that do similar things.
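
LRS doesn’t publish the details of its protocol, so here’s just a conceptual sketch of the kind of bookkeeping a base/control station might do: trackers report which table tag they’re sitting on, and the staff app asks where a given order number ended up. The class and field names are mine, purely for illustration.

```python
# Conceptual sketch of table tracking (my own illustration, not the LRS protocol).
# A tracker paired to an order reads the tag on whatever table it's placed on and
# reports it; the gateway keeps the order-number -> table mapping for the staff app.

class Gateway:
    def __init__(self):
        self.order_to_table = {}

    def tracker_report(self, order_number, table_tag):
        """Called when a tracker detects the tag on the table it was set down on."""
        self.order_to_table[order_number] = table_tag

    def locate_order(self, order_number):
        """The staff app asks where to deliver a finished order."""
        return self.order_to_table.get(order_number, "not yet on a table")

gw = Gateway()
gw.tracker_report(order_number=417, table_tag="TABLE-12")
print(gw.locate_order(417))    # -> TABLE-12
```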

Anyway, my kids thought the Table Tracker was pretty cool, and I could use it to explain to them a bit about what mom’s company does.

And speaking of my kids, here they are, waiting for our table to get tracked.

The Panera we were in also had new order kiosks installed that will let you place your own order, rather than give it to an actual human. They weren’t yet active at the store we were at, but, honestly, I’m not sure how I feel about this. One of the things I like most about eating out is that I don’t have to do the work! Granted, it isn’t like Panera is asking me to prepare my own food (though honestly, even if they were, I would still cherish the fact that I didn’t have to get the groceries…) But I’m still taking on more responsibility than I’d like. Then again, I guess that’s exactly the point, as this article notes that a shocking one in seven orders in the food service business is wrong.

The bottom line is that we’ll probably be seeing a lot more of these automated systems in “fast casual” restaurants like Panera. Seems, alas, like they’re more efficient than we humans are.

ARM and the Internet of Things

From Critical Link’s point of view, one of the most exciting things about the Internet of Things (IoT) is that it’s pretty much synonymous with embedded technology. We’re always interested in reading about ARM technology as well, since ARM is incorporated in a number of our System on Modules. Thus I was doubly happy to come across a couple of articles by David Blaza on the topic of ARM and the IoT, which were published on the ARM Connected Community.

In the first, David addressed security in the IoT. As most are aware, when it comes to the IoT, security is something of the elephant in the room. Many applications, especially on the consumer side, are being rushed to market without the type of robust security solution that industrial applications get. And, of course, since they’re connected to the Internet, any embedded apps that are part of the IoT have inherent vulnerabilities that an app operating entirely within the walls doesn’t have. A recent ARM trend will help shore up IoT security.

“…the trend is many new ARM based multicore boards coming to market that for the first time can run multiple operating systems simultaneously.  By running multiple operating systems the data coming from say real time sensors can be completely isolated from intrusion and be encrypted and sent to the cloud securely.  At the same time another operating system (like Android or Ubuntu) can manage the user interface and network connections. ” (Source: ARM Connected Community – April 27, 2015)
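
To make the “encrypted and sent to the cloud securely” part a little more concrete, here’s a minimal sketch of encrypting a sensor reading and posting it to a cloud endpoint. The URL, the key handling, and the payload format are placeholders; in a real AMP design the real-time OS would own the sensor data and a properly provisioned key, not a Python script.

```python
# Minimal sketch of "encrypt the sensor data and push it to the cloud."
# The endpoint URL, key management, and payload format are placeholders.
import json
import requests
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()      # in practice the key would be provisioned securely, not generated here
cipher = Fernet(KEY)

def push_reading(sensor_id, value):
    payload = json.dumps({"sensor": sensor_id, "value": value}).encode()
    token = cipher.encrypt(payload)                  # authenticated symmetric encryption
    requests.post("https://example.com/ingest", data=token, timeout=5)

push_reading("temp0", 21.7)
```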

David had more to say about ARM’s multiple-OS capabilities in a follow-on post.

“Any good embedded software developer has to think about the classic tradeoff of system performance versus reliability/up time which if you are running a single OS it can be challenging and limiting.  With ARM multicore boards being available for under $100 now SMP and AMP are within reach of every embedded developer.  The other trend that comes into play here is that increasingly embedded systems (and their IoT progeny) need graphical user interfaces (GUI) and access to multiple communications networks.  So in many embedded designs running separate operating systems to handle different system functions solves many of the classic tradeoff problems and now it’s never been easier.” (Source: ARM Connected Community – May 1, 2015)

In this post, David gives a shout-out to Express Logic, and their ThreadX RTOS, which supports our MityDSP-L138 board, a SoM combining a TI C674x DSP, an ARM9 processor, and an optional FPGA.

Critical Link’s first multicore product features the dual ARM Cortex-A9 Altera Cyclone V SoC. A number of our customers have inquired about AMP for just the reasons David points out in his follow-up post: the OS that excels at data acquisition and processing isn’t necessarily the best choice for supporting external user and system interfaces. However, there are challenges that make AMP difficult to implement in many multicore architectures, particularly around inter-processor communication, cache, and shared resource management. We have not yet worked with an AMP solution on our Cyclone V SoC SOM, but I’m sure we will soon!

A new twist on sensors

A week or so back, I saw in the news that the U.S. Department of Defense is funding the FlexTech Alliance, a consortium of 162 academic institutions, researchers, and other organizations that will focus on “flexible hybrid electronics, which can be embedded with sensors and stretched, twisted and bent to fit aircraft or other platform where they will be used….’This is an emerging technology that takes advanced flexible materials for circuits, communications, sensors and power and combines them with thinned silicon chips to ultimately produce the next generation of electronic products,’ [Defense Secretary Ash] Carter said.” (Source: Reuters via Yahoo)

While we’re not part of the FlexTech initiative, Critical Link does do work on defense applications, and we’re always interested in what’s happening in the world of sensors.

The new – and pretty revolutionary – approach to electronics and sensor packaging relies on emerging techniques for printing on “flexible, stretchable substrates,” borrowed from the high-precision printing industry. The sensors that will be produced, using ultra-thin silicon components, will be lightweight and able to bend and stretch. This makes them ideal for wearables, and the applications range from military gear worn by soldiers to medical devices to consumer products.

According to the Reuters article, “the technology also could ultimately be used to integrate sensors directly onto the surfaces of ships or warplanes, allowing real-time monitoring of their structural integrity.”

Anyway, it’s always exciting to learn about new and highly useful technologies.

Somewhere along the line, in some show or another on the industrial revolution, I heard it said that someone born in the generation before the steam engine was invented had more in common with someone who lived in Roman times than they did with their own children and grandchildren. At the pace at which technology is evolving, this may end up being true of my generation. Technologically speaking, we’ll have more in common with Civil War vets than we do with our own grandkids.

To boldly go where no man* has gone before

Somewhere along the line, I bookmarked an article on a robot used to study volcanoes. I finally got around to reading it, and thought it would make for an interesting blog post. So here goes.

NASA’s Jet Propulsion Laboratory (JPL) is using 3D vision-enabled robots that are dropped into volcanoes so that scientists can get a better understanding of how and why volcanoes erupt.

“The first iteration of the project, VolcanoBot1, was tested at Kilauea volcano in Hawaii in May of 2014. The robot, which is equipped with a 3D sensor that looks similar to a Microsoft Kinect, was able to descend to depths of 82 ft. in two locations on the volcano’s fissure. Using VolcanoBot, the researchers were able to create a 3D map of the fissure. During the project, the VolcanoBot 1 also discovered that the fissure does not pinch shut at the bottom, so the team decided to build another version of the robot in order to go even deeper and investigate further.” (Source: Vision Systems)

VolcanoBot 2 is a smaller and lighter version, and also comes with an “electrical connection that is more secure and robust so that researchers can use the 3D sensor’s live video feed to navigate.” The visualization sensors in VolcanoBot 2 can also be rotated. The new version is supposed to go into operation sometime this year. The original date was supposed to be March, but I couldn’t find any evidence that this has as yet happened. (This picture of JPL scientist Carolyn Parcheta, whose project this is, will give you a good sense of just how small the VolcanoBot 2 is. No wonder it can boldly go into volcano fissures where no man (or woman) has gone before.)
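
I couldn’t find the details of how the 3D map gets built, but the basic step of turning a Kinect-style depth frame into a cloud of 3D points is standard pinhole-camera math. Here’s a sketch; the camera intrinsics are made-up values, not VolcanoBot’s.

```python
# Back-project a depth image into 3D points (standard pinhole-camera math;
# the intrinsics below are made-up values, not VolcanoBot's actual sensor).
import numpy as np

FX, FY = 525.0, 525.0        # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5        # principal point (assumed)

def depth_to_points(depth):
    """depth: (H, W) array of distances in meters -> (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.dstack((x, y, depth)).reshape(-1, 3)

points = depth_to_points(np.full((480, 640), 2.0))   # fake frame: everything 2 m away
```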

I’m a bit disappointed that I couldn’t find much technical information on the VolcanoBots. If nothing else, I’d like to have a bit more detail on what sensors were being used. (I’m not that familiar with 3D sensors, but did find an article on them in PC Magazine.)

Anyway, volcanologists at NASA – I almost wrote Vulcanists! – are testing their robots out on Earth, with hopes of being able to use them to explore volcano sites on Mars and the Moon.

*There’s a Google limit on title length. The real title is “To boldly go where no man (or woman) has gone before.”

The impact of robots on development

Regular followers of the Critical Link blog will know that I’ve been interested in robots since my grad school days at UMass, when I got to work with a robot named Harvey. So, naturally, I enjoyed a recent post by Altera’s Ron Wilson on how robots are taking over. He’s not really worried about robots seizing control and aiming for world domination – at least not yet. No, Ron’s focus is on how robots are influencing development, and how they can impact both your hardware budget and your schedule.

“The issue is that concepts—and with them, design requirements—from the world of robotics are filtering into other kinds of embedded systems. The infiltration seems to follow an identifiable sequence. First, low-cost sensors, often from smart-phone technology, and actuators, such as servo motors from radio-controlled (RC) models, increase the number and complexity of control loops in new designs. Then demands for increasing autonomy gradually pass control of the system from human operators to the system itself: first automating sequences of related actions, then shifting the human-machine interface from actions to goals, then moving toward full autonomy. Think, for example, of a car evolving from a manual to a fully-automatic transmission, to an automated driver-assist system, and on to full self-driving capability. Embedded systems are becoming robots.”

“Obviously, as you move along this path the computing load on the system increases. But by how much? In what algorithms? And how do you provision for these new computing loads?” (Source: Altera)

Ron walks, in considerable detail, through an example of a toy robot: a six-legged apparatus, or hexapod, that resembles a drone. Each leg can rotate and bend at its two joints. While the example is pretty simple,

“It also conceals a remarkable complexity, which will provide us a rich example of how computing intensity scales with seemingly reasonable changes in requirements.”

The requirements will include the ability to calculate the angles that will let the hexapod take a walk, and I/O to support multiple pulse trains. Factor in that, in real life, the hexapod will need to navigate uneven terrain and step over things, which will add to the computing load and require the robot to construct its own 3D map of the terrain. Enter cameras and machine-vision algorithms.
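
Just to give a flavor of the simplest piece – the leg-angle calculation – here’s a toy two-joint version for a single leg treated as a planar two-link limb, solved with the law of cosines. The link lengths and target point are arbitrary; Ron’s article is the place to go for the real treatment.

```python
# Toy inverse kinematics for one hexapod leg modeled as a planar two-link limb
# (hip + knee). Given a foot target (x, y) in the leg's plane, solve both joint
# angles with the law of cosines. Link lengths are arbitrary illustration values.
import math

def leg_angles(x, y, l1=5.0, l2=7.0):
    d2 = x * x + y * y
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_knee = max(-1.0, min(1.0, cos_knee))     # clamp for rounding / unreachable targets
    knee = math.acos(cos_knee)
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee), l1 + l2 * math.cos(knee))
    return math.degrees(hip), math.degrees(knee)

print(leg_angles(9.0, -3.0))    # hip and knee angles, in degrees
```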

And so on…More power. A “wireless link to offload much of the computation.”

Ron’s point is that, as robotics becomes more prevalent in applications, complexity will increase as well. New architectural models will be required, and assumptions of the past will go out the window. He ends with a warning:

“As robotics infiltrates the rest of the embedded design world, the wise will plan well.”

Well, you can’t argue with that, other than to say that embedded design has always required thoughtful planning.

Anyway, I probably haven’t done justice to Ron’s piece, so I encourage you to go over and read it in its entirety.
———————————————————————————————————————————————-

Just wanted to point out that Altera is a Critical Link technology partner. Our MitySOM-5CSx combines the Altera Cyclone V System on Chip (SoC), memory subsystems and onboard power supplies. This SoM provides a complete and flexible CPU and FPGA infrastructure for highly-integrated embedded systems.

Obsolescence

I recently heard that CVS will be phasing out its one-hour film processing. Walgreens is doing the same, if it hasn’t done so already. Let’s face it, most pictures these days are taken on a smartphone. The few that do get printed out are done on a home printer, or via a service like Snapfish.

This reminded me of a funny story my colleague Amber had told me.

Her daughter – about nine years old at the time – had come across a joke with a punchline about a dark room. She asked her mom what a dark room was. Amber explained that you needed a dark room (literally) to develop film, since light would ruin the pictures. That explanation didn’t help much, as the next question that Amber’s daughter had was “What’s film?”

Film and dark rooms are just a couple of the things that “the kids” won’t get.

Amber’s daughter spotted another one recently, when she asked her mom why the “Save” button for files looks the way it does. I hadn’t thought of it – it’s just the “Save” icon – but it’s a 3.5” floppy disk. Which really wasn’t so floppy. The “real” floppies were 5.25”. Whatever the size, when was the last time you saw one of those?

Another funny obsolescence story came from a friend whose niece wanted to know how folks used to text on an old rotary dial phone.

If you played a party game and went around naming things that have become obsolete over just the last couple of decades, it would probably take a long time to exhaust the list.

But in thinking about obsolescence, it’s interesting that a lot of the underlying processing technology that contributes to so many products becoming obsolete doesn’t in itself become obsolete.

DSP, FPGA, ARM.

Certainly, the processing gets faster and more powerful, but the fundamentals stay pretty darned consistent.

Just something to mull over on a hot summer’s day.

Traffic Signal Pre-emption Systems

A few weeks ago, there was a tragic story in the news about an EMT who was killed when the ambulance in which she was riding – on a call – was broadsided at an intersection. The story was all the sadder because the EMT was so young – she was just 22 – and was working her last shift before heading off to graduate school.

As of this writing, all the details aren’t in, but however it turns out, when emergency vehicles – fire trucks, police cars, ambulances – are on the move, the circumstances are often dangerous.

Many traffic signals are equipped with sensors that enable them to detect the flashing strobe lights on emergency vehicles. (By the way, in case you’re thinking of trying to game the system by playing around with your high beams, the sensors respond to lights that are flashing at a very high rate, and some traffic lights work only when there’s a specific pattern to the flashing. Overall, traffic signal pre-emption systems are pretty sophisticated, and, in any case, it would be illegal for unauthorized people to have one.)

Anyway, there’s now a new, smarter system that also keys off turn signals, and changes the lights ahead based on the turn direction, rather than just what’s directly in the path of the emergency vehicle. The system is tied into the turn signal, so it knows if the emergency vehicle is making a turn at a light and, if so, which way it’s going. It then aligns the lights along the cross street in the direction the vehicle is heading, to clear the path. This gives any “civilian” cars and trucks that are there a chance to clear out, because they’ll have a green light. And when the emergency vehicle gets there, it will have a clear shot.
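
Just to make the idea concrete, here’s a toy model of that decision – given the approach the vehicle is arriving on and its turn signal, figure out which legs of the intersection should go green. This is my own illustration, not GTT’s Opticom logic, and the representation of the intersection is made up.

```python
# Toy model of turn-aware signal pre-emption (my own illustration, not Opticom's logic).
# Approaches are named by the leg of the intersection the vehicle arrives on.

APPROACHES = ["north", "east", "south", "west"]    # listed clockwise

def preempt_green(approach, turn_signal):
    """turn_signal: 'left', 'right', or None (straight through)."""
    i = APPROACHES.index(approach)
    if turn_signal == "right":
        exit_leg = APPROACHES[(i - 1) % 4]
    elif turn_signal == "left":
        exit_leg = APPROACHES[(i + 1) % 4]
    else:
        exit_leg = APPROACHES[(i + 2) % 4]         # straight through
    # Green for the vehicle's own approach plus the leg it will exit onto,
    # so traffic already there gets a chance to clear out ahead of it.
    return {approach, exit_leg}

print(preempt_green("south", "left"))              # -> {'south', 'west'}
```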

GTT’s Opticom is one system that handles the turn signals. There’s a pretty good, high-level video of how their system, which combines GPS and infrared technology, works here. (You may want to skip the intro about why it’s important, and get to the “how it works” stuff, which starts about three minutes in.)

If it makes things safer for emergency vehicles (and those along their way), and gets those emergency vehicles to those in need faster, I’m all for it.

Two New CMOS Consortia Worth Mentioning

There’s some interesting news on the CMOS front that I wanted to share with you.

This past May, in Europe, two different consortia began work on CMOS image sensor projects.

One is the CISTERN project. CISTERN – which is more or less an acronym for CMOS Image Sensors TEchnologies’ Readiness for Next generation applications – is focused on technology for broadcast and entertainment, high-end security, and multispectral imaging, among other industries. The goals for CISTERN include:

  • Develop CMOS image sensors with improved performance (spatial resolution, temporal resolution, higher bit depths, lower noise, etc.)
  • Develop real-time image processing techniques needed to improve the quality of the digital output signal of the sensor demonstrators.
  • Develop and demonstrate the capability to produce multispectral imagers by hybridization of multispectral filter arrays on top of CMOS sensor. Both matrix filters and hybrid assembly process will be developed within the project.
  • Demonstrate the improved performance of the CMOS imagers combined with related processing in a number of demonstrators.
  • Develop ultra-high resolution, widely-opened sensor-adapted zoom lenses (2/3” format, 4K resolution), for broadcast and security applications/markets. (Source: Vision Systems)

The other research project is EXIST. Again, this is a more-or-less acronym for Extended Imaging Sensor Technologies. EXIST’s focus is on:

“…systems designed to improve  security, safety, and healthcare. The image sensor research will focus on enhancing and extending the capabilities of current CMOS imaging devices for better performance including sensitivity, dynamic range, quantum efficiency, and more. Key developments in the project will be improvements in hyperspectral and multispectral capabilities.

Objectives include enhancing and extending the capabilities of current CMOS imaging devices:”

  • New design (architectures) and process technology (e.g. 3D stacking) for better pixels (lower noise, higher dynamic range, higher quantum efficiency, new functionality in the pixel) and more pixels at higher speed (higher spatial and temporal resolutions, higher bit depth), time-of-flight pixels, local (on-chip) processing
  • Extended sensitivity and functionality of the pixels: extension into infrared, filters for hyperspectral and multispectral imaging, better colour filters for a wider colour gamut, and Fabry-Pérot interference cells
  • Increasing the optical, analog and data imaging pipelines to enable high frame rates, better memory management, etc. (Source: Image Sensors World)

One of the EXIST partners is CMOSIS, a company that Critical Link works with. Our MityCAM-C8000 is based on a CMOSIS 8MP high-speed, global shutter CMOS sensor. This MityCAM couples an 8MP CMV8000 imaging sensor from CMOSIS with the processing technology in Critical Link’s Altera Cyclone V SoC System on Module.  Production-ready configurations include fully-enclosed cameras, complete 3-board sets, and partial board sets for customers who elect for custom sensor or I/O boards. Applications that the MityCAM-C8000 is designed for include machine vision, motion control, traffic monitoring and management, security and surveillance, medical, and embedded instrumentation.

Both CISTERN and EXIST are expected to complete their work in 2018. I’ll be looking forward to seeing what they come up with.