
Could a rolling robot be rolling your way soon?

A couple of months ago, in a post about neural networks, I wrote a bit about my experience with robotics when I was in grad school at UMass back in the day. Robotics has gotten a lot more sophisticated since then, as I was recently reminded when I came across an article, “The Rolling Robot Will Connect You Now”, in The New York Times.

The article noted that, with the cost of remote-controlled robots – a.k.a. telepresence machines – going down (some of the lower-end, consumer-oriented ones can now be had for something in the $1K–$2K range), these robots would start being used more widely. They:

… could serve as a conduit for virtual visits from family and friends to help older people live at home longer. Traveling business people could use them to show their faces — by way of a screen on the rolling robot — to colleagues at central headquarters, or to read bedtime stories to their children from afar.

Ethicists have weighed in, suggesting that, for example, adult children with elderly parents might just check up on them remotely, rather than hop in the car for a “real” (and more meaningful and personal) visit.

Those possibilities aside, there are a number of important and interesting areas where these telepresence machines can be deployed. Think of telemedicine. And about kids who are ill and stuck at home for long periods, and who, with a robot, could stay more closely involved with their classmates and teachers. It’s also fun to think of the business uses, which is especially interesting to me now that I’m spending a lot more time on the road than I used to.

As an avid Evernote user, I remember reading a while back that Evernote’s CEO Phil Libin used telepresence robots to help make sure that the folks who worked in a facility remote from the Mountain View, California, headquarters felt like they were a real part of the company.

Don’t worry, Critical Link employees, I’m not going to be setting this up anytime soon. But it’s kind of fun to think about it!


DSP: making a comeback in mobile apps

Well, last week I blogged about ARM processors ruling the world, especially from a consumer product perspective.

But not so fast, maybe.

EE Times had a recent article observing that “there’s no such thing as ‘game over’ for the mobile apps processor battle,” and talking about the resurgence of our old pal the DSP in the mobile market.

The new differentiators cropping up for mobile devices are the always-on mobile SoC that can be promptly awakened by voice activation, sensor fusion, multi-channel surround-sound audio, eye-tracking, post video processing, and more. An additional wrinkle is that many mobile SoCs are also being pitched as the brain that drives an automaker’s in-vehicle infotainment system.

Such changes are prompting apps processor designers to rethink DSP, GPU, and CPU cores, giving birth to a new generation of “light” apps processors, designed as co-processors to be used in conjunction with a main apps processor. Many designers are also intent on beefing up the performance of their own apps processors, by re-crafting graphics cores (e.g., Nvidia’s Tegra K1) and/or adding more processor cores (e.g., MediaTek’s octa-core apps processor).

Qualcomm is leading the pack here with its Hexagon DSP core, but others are likely to follow.

One key here is SoCs with small MCUs built in. We are seeing the semiconductor manufacturers include more and more of these in their silicon – where it used to be just one or two. In some of its products, TI has included programmable real-time units (PRUs) that offload the more power-hungry CPU or DSP in low-power modes, in which background activities can be run (e.g., location tracking, heart-rate monitoring). The main DSP (or ARM) can be put to sleep, and gets woken up only when the PRU determines that it needs to be.
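To make the pattern concrete, here’s a minimal sketch of the idea in plain C – a simulation that runs on a host, not actual PRU firmware, and the sensor read, threshold, and function names are all made up for illustration. A small always-on unit samples at a low duty cycle and only wakes the big core when something interesting happens.

```c
/* Illustrative sketch only: simulates the "small core watches, big core
 * sleeps" pattern in plain C. Real PRU firmware would read hardware
 * registers and signal the ARM/DSP with an interrupt, not a function call. */
#include <stdio.h>

#define WAKE_THRESHOLD 100  /* hypothetical heart-rate alarm level (bpm) */

/* Stand-in for the low-power sensor read the small core would perform. */
static int read_heart_rate(int t)
{
    return 60 + (t * 7) % 50;   /* fake data for the simulation */
}

/* Stand-in for raising the interrupt that wakes the main ARM/DSP. */
static void wake_main_processor(int sample)
{
    printf("waking main processor: sample %d exceeded %d\n",
           sample, WAKE_THRESHOLD);
}

int main(void)
{
    /* The big core is "asleep"; only this low-duty-cycle loop runs. */
    for (int t = 0; t < 20; t++) {
        int bpm = read_heart_rate(t);
        if (bpm > WAKE_THRESHOLD)
            wake_main_processor(bpm);   /* only now does the big core burn power */
        /* ...otherwise stay in a low-power wait until the next sample. */
    }
    return 0;
}
```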

Just for the record, we have a couple of SoMs based on ARM processors – TI’s OMAP-L138 (an ARM + DSP dual-core device) and the AM335x (ARM only) – so you may be hearing more from us (and TI) on PRUs.

How ARM-based SoMs have come to the fore

While our first System On Modules (SoMs) were DSP-based, we have introduced several ARM-based SoMs over the last couple of years.

MityDSPs of varying sorts are still used in many of our customers’ applications, but there’s no doubt that the embedded market has been moving towards ARM for a number of years, a trend that really became noticeable (to me, anyway) with the availability of ARM7-based devices.

At that point, the Intel architectures that had been deployed so successfully in PCs were not especially practical for many embedded products, other than those well suited to PC/104. They required too much power and were too complex for the simpler, more straightforward designs that the growing number of embedded products coming to market needed.

As ARM became more common, an entire ecosystem began to build up around it – debug tools and compilers, both open source (GNU) and commercial. The momentum became inescapable – apparently so inescapable that a non-technical business publication like Business Week took notice.  A recent article by Ashlee Vance, “The Unlikely Tale of How ARM Came to Rule the World”, highlights the takeover by ARM, noting that “just about every smartphone, mobile phone, and tablet runs on an ARM core.”

Of course, those consumer products are a far cry from the sorts of applications that our SoMs get embedded in, but ARM is certainly making headway in our world. And it’s entertaining to read Business Week’s view on how ARM took over (and fun to brush up on its history).

We all know the old expression “uneasy lies the head that wears the crown”, so there’s no doubt another up and coming architecture out there that will rule the world at some point. I’m not sure what it is. Intel recently tried to compete with its Atom, but that tended to require more power, and hasn’t yet gone viral like ARM did. (I don’t think it will.)

Any candidates in your mind for what the next world-ruling architecture will be?


Zipcar Technology

What with Omar’s post of February 12th, and mine last week, it looks like February has been Car Technology Month here at Critical Link.

We’re really not all that car mad, it’s just that there’s so much interesting technology on the automotive front these days. Some of that interesting technology is brought to us by Zipcar.

For those who aren’t familiar with Zipcar, it’s a car-sharing service – “wheels when you want them” – that lets members make use of a car for a short period of time – a few hours, rather than the full day that you pay for with a traditional rental car. Zipcar members reserve their cars online, specifying the hours they want it for, their location, and their other preferences – Zipcar offers everything from pickup trucks to hybrids to luxury “date night” cars. Zipcar members are issued smart cards, which are used to unlock the car they’ve reserved. The keys are in the car already. Away you go!

Founded in Boston in 2000, Zipcar is now all over the map, and is especially popular in urban and university settings. (No surprise that rental car companies perceived car sharing services as a threat. Last year, Avis acquired Zipcar.)

Anyway, I recently used Zipcar for the first time while in San Francisco, which got me thinking about all the cool technology that goes into the mix.

Unlike technology that helps drivers parallel park or avoid obstacles in the road, Zipcar has to handle a number of requirements that have nothing to do with safety or driver ease of use. (But plenty to do with member ease of use!)

For starters, they need to use RFID for the card reader, which must communicate with the Zipcar home office, presumably via cellular. (Whatever the communications technology is, it has to work everywhere: while Zipcar uses many outdoor locations, it also houses cars in garages, some of which are underground.)

Zipcar technology also has to tie into each car’s computer for a number of things:

  • Odometer readings – there’s a 180-mile limit per usage, so that needs to be monitored
  • Control of the electric door locks
  • Ability to make the car flash its lights and beep its horn from the app
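None of us outside the company knows how Zipcar actually builds this, but as a thought experiment, here’s a hypothetical sketch in C of what the in-car unit’s command handling might look like. Every name, field, and command code below is invented for illustration – it is not Zipcar’s protocol.

```c
/* Purely hypothetical sketch of an in-car telematics unit's command handler.
 * All message formats and command codes are invented for illustration. */
#include <stdio.h>
#include <stdint.h>

typedef enum {
    CMD_UNLOCK_DOORS,     /* member's RFID card validated by the home office */
    CMD_LOCK_DOORS,
    CMD_FLASH_AND_BEEP,   /* "help me find my car" from the mobile app */
    CMD_REPORT_ODOMETER   /* needed to enforce the per-reservation mileage limit */
} command_t;

typedef struct {
    command_t cmd;
    uint32_t  reservation_id;   /* which booking the command applies to */
} command_msg_t;

/* Stand-ins for the vehicle-specific interfaces (CAN bus, body controller,
 * and so on) that would have to be engineered for each make and model. */
static void set_door_locks(int locked)    { printf("doors %s\n", locked ? "locked" : "unlocked"); }
static void flash_and_beep(void)          { printf("flashing lights, beeping horn\n"); }
static uint32_t read_odometer_miles(void) { return 45210; /* fake reading */ }

static void handle_command(const command_msg_t *msg)
{
    switch (msg->cmd) {
    case CMD_UNLOCK_DOORS:   set_door_locks(0); break;
    case CMD_LOCK_DOORS:     set_door_locks(1); break;
    case CMD_FLASH_AND_BEEP: flash_and_beep();  break;
    case CMD_REPORT_ODOMETER:
        printf("reservation %u: odometer %u miles\n",
               (unsigned)msg->reservation_id, (unsigned)read_odometer_miles());
        break;
    }
}

int main(void)
{
    /* Pretend these two messages just arrived over the cellular link. */
    command_msg_t unlock = { CMD_UNLOCK_DOORS,    12345 };
    command_msg_t miles  = { CMD_REPORT_ODOMETER, 12345 };
    handle_command(&unlock);
    handle_command(&miles);
    return 0;
}
```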

Gas is included in the hourly rental costs for a Zipcar, but users are expected to fill ‘er up if the fuel gauge goes below one-quarter. But you don’t pay out of pocket for gas. Each Zipcar has a gasoline card. When you go to pump your gas, you’re prompted to enter your Zipcar member number and the current mileage. Don’t know how this one works – there’s no way they worked out a deal with every gas station in the country to get this implemented – but it pretty much does, as there are no restrictions on what gas station you can go to.

They also must have solved the problem of someone topping off their Zipcar and then filling up the car behind it. (Maybe Zipcar members don’t think this way…) It might be that there are some built-in smarts that monitor mileage vs. gas in the tank, and tie into the car’s estimate of its remaining range.

Another interesting problem they’ve solved is the ability to tie into all sorts of different makes and models. Since there are no standards for what they do, you would think they need to engineer a new interface every time they add a model to their fleet.

Zipcar was and is a great idea which never could have come about without the Internet of Things.

Driving the Future

Not to be outdone by my friend and colleague Omar Rahim, who posted last week about what’s happening with automotive technology – I am not quite a car buff, but I do have a couple of soft spots when it comes to cars – I wanted to add my two cents, based on some work that our partner Texas Instruments has done on “Driving the Future.”

The video linked here is as good as anything I’ve seen on what the future of driving will look like.  Akin to concept cars, this is a concept video of what the automobile experience can be like in the future. I don’t know how many of these ideas will show up in production cars, but it’s sure fun to think about.

It touches on the camera/vision areas that Omar wrote about, and also gets into some of the other very interesting technology that may be in place. Things like biometric security, so that the doors will only open for the right people. Personalization, so that the ride is customized for each individual – temp, seat position, etc. Virtual dashboards, which are very cool.

They also demonstrate sensing technology that could inform you if you’ve left something on the roof. In the video, it was a coffee mug, but I understand that it can be something else.  (Ok, I confess, I once left a customer site with my iPad on the roof of the car. After driving half an hour I realized what may have happened and returned to the scene of the crime to find my iPad destroyed in the roadway!)

Although assisted parking has been around for a while, I particularly liked the demonstration of automatic parallel parking that occurs when the driver is completely out of the car. Not that I want folks to lose this important skill. But there are plenty of people who have to parallel park so rarely that they end up with two wheels on the sidewalk, and the hood of the car halfway into the street.

For those interested in more details on what the future of driving looks like, here’s a link to a paper, “Driving the Future: TI’s Automotive Perspectives 2013.” It’s obviously written from TI’s point of view, but it gives a pretty broad overview of where technology will be taking our cars and driving experience in the not-so-distant future.


Driverless cars, not so fast; but cars that can see, now we’re talking

Although sometimes when I’m on a long trip, the idea seems pretty appealing, I’m not quite ready for driverless cars. But as a car buff and an electronics engineer, I’m certainly interested in, and often intrigued by, the increasingly smart technology that’s going into automobiles these days.

In this light, I enjoyed an article from MIT Technology Review, published last October, that I recently came across. The article, “Driverless Cars Are Further Away Than You Think”, described the promise of “autonomous driving” – fewer accidents, fewer traffic-related deaths – but also pointed out that it’s going to be a while before driverless cars are perfected and cost-effective enough to become the norm.

Not surprisingly, given the cost factor, much of the current work being done in this area is on luxury cars.

I especially enjoyed the detail on the types of technology being used. Since one of my main focus areas here at Critical Link is imaging, I liked reading about the vision technology being deployed.

In its 5 Series, BMW will be using video cameras to track lane markings and read road signs. They’ll also be embedding radar sensors to spot objects in the road ahead, and side laser scanners. Mercedes-Benz is prototyping an “Intelligent Drive Research Vehicle” that will have a stereo-camera that sees objects in 3-D, and other cameras that will recognize traffic lights and read road signs. It will also have an infrared camera for night vision. GM’s Cadillac SRX will house laser sensors, radar, and cameras.

If you’re not familiar with what Critical Link is doing with vision systems, we offer cameras that can be used in many different imaging and vision applications. These cameras combine sensors from a number of different manufacturers with our DSP or ARM-based system-on-modules to take care of processing. (You can read more about our products here.)

Meanwhile, if you’re a little nervous about driverless cars, keep in mind that they probably won’t be on the road for a few more years. And to further ease your mind, the system in the driverless BMW, for one, is “designed to defer to a human driver, giving up control whenever he or she moves the wheel or presses a pedal. And if all else fails, there is a big red button on the dashboard that cuts power to all the car’s computers.”

As long as the big red button works!


In case you had any doubts about the Internet of Things, Google acquires Nest

As Critical Link blog readers know, I am a fan of the Nest Thermostat. It’s a nifty gadget, pretty useful, and out and out fun to play around with.

Much as I love my Nest, I have to admit that my jaw dropped a bit when I read last month that Google was acquiring the company for a cool $3.2 billion.

That’s an awful lot of money…

But Nest is the developer of the leading smart thermostat, and I’ve read that Nest thermostats are now in a million homes. And they have added to their product portfolio with Protect, a smart smoke and carbon monoxide detector. (Ok, so I installed mine last night – more on that later, probably.) And the company owns a boatload of patents (some admittedly being contested). And it was founded by Tony Fadell and Matt Rogers, who both have strong pedigrees, having worked on the first iPod while at Apple.

So while $3.2 billion may seem like crazy money, it’s not that crazy.

Certainly, from the Nest standpoint, the acquisition makes sense (and dollars):

“Google has the business resources, global scale and platform reach to accelerate Nest growth across hardware, software and services for the home globally. And our company visions are well aligned – we both believe in letting technology do the hard work behind the scenes so people can get on with the things that matter in life. Google is committed to helping Nest make a difference and together, we can help save more energy and keep people safe in their homes.” (Tony Fadell, quoted in TechCrunch.)

On the Google end, this jumpstarts its efforts to get into the connected-device world (and surpasses its earlier, less successful ones) – and gives Google a gateway into the connected home. It will be very interesting to see where Google takes this.

What I especially like about this acquisition is that it confirms what we at Critical Link have been saying all along about the Internet of Things: that IoT is big and getting bigger, important and getting more important, and will sooner rather than later involve pretty much everything electronic, whether in the household, on an individual, or in the scientific and industrial applications that we work with.

Any way you look at it, this is exciting news.

Neural networks: computers are getting really brainy

A few weeks ago, my friend and colleague Omar Rahim sent me a link to an interesting article by John Markoff that he saw in The New York Times. The article was on a new chip that’s slated for release this year.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals. (Source: NY Times)

This, of course, has tremendous implications for the types of applications that we provide embedded systems for (and for all of computing, for that matter).

The article talks about a shift from processors that use the traditional von Neumann architecture to new processors that:

… consist of electronic components that can be connected by wires that mimic biological synapses… They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows in to the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.

It’s way cool that this technology is being put into hardware. The speed there will be important. Typical computer programming is precise — the machine does exactly what it is told. With neural networks and this learning, you really don’t know what “decision” the computer is going to make. It’s still (at a very low level) performing precise operations, but it’s storing data in a distributed fashion that impacts the decisions it makes in the future.  These are the coefficients they talk about in the article. As the coefficients take shape over multiple experiences, the system learns “right” from “wrong” in a fuzzy world. 
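To make “weights shaped by experience” a little more concrete, here’s a minimal sketch – ordinary C running on a conventional CPU, nothing like the neuromorphic hardware in the article – of a single artificial neuron learning the logical AND function by nudging its weights after every example it gets wrong.

```c
/* Minimal single-neuron (perceptron) example: the "program" is nothing but
 * weights that get nudged toward correct behavior with each experience. */
#include <stdio.h>

int main(void)
{
    /* Training data: the logical AND function. */
    const int inputs[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    const int targets[4]   = {  0,     0,     0,     1   };

    int w[2] = { 0, 0 };   /* the learned "coefficients" (weights) */
    int bias = 0;

    /* Each pass over the data is another round of experience: the weights
     * are adjusted whenever the neuron's answer is wrong. */
    for (int epoch = 0; epoch < 25; epoch++) {
        for (int i = 0; i < 4; i++) {
            int sum = w[0] * inputs[i][0] + w[1] * inputs[i][1] + bias;
            int out = (sum > 0) ? 1 : 0;      /* the neuron "fires" or stays quiet */
            int err = targets[i] - out;
            w[0] += err * inputs[i][0];       /* nudge the weights based on the error */
            w[1] += err * inputs[i][1];
            bias += err;
        }
    }

    printf("learned: w0=%d w1=%d bias=%d\n", w[0], w[1], bias);
    for (int i = 0; i < 4; i++) {
        int sum = w[0] * inputs[i][0] + w[1] * inputs[i][1] + bias;
        printf("%d AND %d -> %d\n", inputs[i][0], inputs[i][1], (sum > 0) ? 1 : 0);
    }
    return 0;
}
```

After a handful of passes the weights settle into values that get all four cases right – the “decision” was never written into the code; it emerged from the data.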

Anyway, this is a really interesting article on the future of computing, and especially interesting to me because I studied neural networks when I was in grad school at UMass. While I was there, I worked in the computer vision lab, where we had an autonomous robot that we were trying to make navigate the real world completely on its own. The robot’s name was “Harvey.”

I was able to dig up a video taken from Harvey’s perspective. I have no idea when this was taken, but that’s Harvey moving toward the Graduate Research Center, the computer science building at UMass.

Brings back a lot of memories … 

Calculators Are Getting Smarter, Too

I’m too young to have used a slide rule, but I sure remember when calculators were big and clunky. And, while they did save time over hand calculations or the slide rule, they didn’t do all that much.

Hands down the best calculator I’ve ever had was the HP11C RPN calculator that my parents gave me for Christmas my senior year in high school. This is still a great calculator today!

Like just about everything, calculators are getting smaller, less clunky, and a whole lot smarter.

A good example?

TI’s Nspire, a calculator platform which Chris Grachanen blogged about a few weeks back on EDN Network.

It can connect to a variety of measurement sensors. It controls measurement sensor ranges, acquires measurement readings, and provides both data analysis and data archiving through its Lab Cradle accessory.

The TI-Nspire Lab Cradle has five sensor ports and collects measurement data at a rate of up to 100,000 samples per second. (Source: Chris Grachanen on EDN)

100k samples per second sounds like a lot, and it is. But many laboratory instruments, software-defined radios, etc. – the sorts of applications that Critical Link gets involved with – sample at 1 to 50 Msps, or more, and must provide a low-noise, high-ENOB signal. This is a point that Chris covers:

These measurement sensors aren’t designed for stringent measurement applications where low measurement uncertainty is a prime consideration. However, they are sufficient for monitoring and evaluating many physical phenomena. They are ideal for evaluating measurement technologies before investing lots of money in more commercial or metrology-grade instrumentation.
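Just to put rough numbers on that gap – my own back-of-the-envelope figures, not from Chris’s post – the ideal-ADC rule of thumb SNR ≈ 6.02 × ENOB + 1.76 dB, together with the raw data rates involved, shows why a 50 Msps front end is a different class of problem than a 100 ksps lab accessory. The sample rates and 16-bit sample size below are illustrative, not taken from any datasheet.

```c
/* Back-of-the-envelope comparison of a 100 ksps lab accessory with the
 * 1-50 Msps front ends common in instrumentation and software-defined radio.
 * Sample rates and bit depths are illustrative, not from any datasheet. */
#include <stdio.h>

int main(void)
{
    /* Ideal-ADC rule of thumb: SNR (dB) = 6.02 * ENOB + 1.76 */
    const double enob[] = { 10.0, 12.0, 14.0 };
    for (int i = 0; i < 3; i++)
        printf("ENOB %4.1f bits -> ideal SNR %.1f dB\n",
               enob[i], 6.02 * enob[i] + 1.76);

    /* Raw throughput, assuming 16-bit (2-byte) samples on a single channel. */
    const double rates_sps[] = { 100e3, 1e6, 50e6 };
    const char  *labels[]    = { "100 ksps", "1 Msps", "50 Msps" };
    for (int i = 0; i < 3; i++)
        printf("%-8s -> %5.1f MB/s of raw samples\n",
               labels[i], rates_sps[i] * 2.0 / 1e6);

    return 0;
}
```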

Whether it can stand in for “commercial or metrology-grade instrumentation” or not, it’s cool that the Nspire can give students a way to measure the world and interact with data. This is something that we didn’t get to do back in the day when we were working our clunky old calculators, or even my nifty HP11C. We didn’t get to do things like this until we were deep in embedded system design, out in the real world, on what was (back then) a multi-million-dollar project. (My first experience with this sort of data acquisition was when I was working on a ground-based radar system.)

Amazing how smart these calculators are getting.

This has always bugged me

Omar Rahim, here, with my first blog post.

When I thought about starting to do some occasional posting, I figured I’d be writing about embedded vision. But then I happened on this article, and thought I’d start here.  Starting with this topic actually makes some sense – at least to me – as I’ve always been curious about how the term “bug” came into use to refer to a hardware or software glitch.

James Huggins starts out his explanation by relating a funny story about an early computer at Harvard University way back in the day:

On the 9th of September, 1947, when the machine was experiencing problems, an investigation showed that there was a moth trapped between the points of Relay #70, in Panel F.

The operators removed the moth and affixed it to the log…The entry reads: “First actual case of bug being found.”

The word went out that they had “debugged” the machine and the term “debugging a computer program” was born. (Source: James Huggins.com)

These guys had actually found a real bug, but that doesn’t tell us how the word came into use to begin with. And it had been in use for a while. During WWII, it referred to radar problems. In the late nineteenth century, as electricity came into widespread use, “bug” was sometimes used to describe a bad connection. Before that time, the term was used in telegraphy:

There were the older “manual” keyers that required the operator to code the dots and dashes. And there were the newer, semi-automatic keyers that would send a string of dots automatically. These semi-automatic keyers were called “bugs”. One of the most common brands of these keyers, the Vibroplex, used (and still does use) a graphic of a beetle.

These semi-automatic “bugs” were very useful, but required both skill and experience to use. If you were not experienced, using such a “bug” would mean garbled Morse Code.

No guarantee that all this is 100% factual. (I’m an engineer, so of course I do worry about these things.) But it is an interesting explanation of how “bug” came into use.

So one less thing to keep bugging me! (Of course, now I have to wonder how we started using the word “bug” to mean “bother”.)