
Firefighting and the IoT

With a son who's been a volunteer firefighter and EMT since high school, and who's now a professional engineer working in fire equipment sales and support, you'd best believe I was interested in a recent Forbes article, "How The Internet Of Things Can Help Firefighters Save Lives" by Federico Guerrini.

He notes that, with over one million fires each year in the U.S. resulting in thousands of lives lost and billions of dollars in property damage, it may be time to get smarter about firefighting.

Researchers from the National Institute of Standards and Technology, and the Fire Protection Research Foundation believe that at least part of this heavy toll could be reduced, if fire brigades, instead of relying heavily on the experience and judgment of the incident commander–as it often happens–could base their decisions on the data systematically and scientifically collected on the scene.

Their joint report, "Research Roadmap for Smart Fire Fighting" contains several examples of how enhanced data gathering, processing and delivery could transform traditional fire protection and firefighting practices, combining the points of strengths of what we are getting used to call the "Internet of Things" (in its various declinations) with the big data analytics. (Source: Forbes)

Smarter firefighting would rely on the technology that we’re very familiar with: wireless networks, sensors embedded in the equipment and clothing of responders, imaging from cameras to identify the location of people in a burning building, data from building control systems providing information on hotspots and chemical leaks…

Drones are another technology with potential. They're already in use for monitoring wildfires; why not indoors? And:

Robots, equipped with a wide array of chemical sensors, on-board cameras, and lasers, will be used to gather data in risky environments, where humans fear to tread.

All this technology is not going to be put in place overnight. Equipment would have to be purchased and, especially for small-town volunteer outfits, funding is always an issue. And, once equipped, firefighters would have to be trained (and sometimes untrained from their old ways).

Then there are regulatory issues to be considered, especially where technology used by “civilians” could get in the way of emergency operations. Easy to imagine locals flying drones over a fire scene, isn’t it?

Another possible issue could be that of information overload. First responders might find themselves "task saturated" by everything – sensors, images, cameras – being thrown at them, and push back, as happened when the first mobile data computers were deployed: fire officers would ignore them, or in some cases turn them off, preferring to rely on older, tried-and-true methods instead.

All these considerations aside, smart firefighting equipment will be coming and, as far as I’m concerned, it can’t get here fast enough. One more way in which the IoT really can make life better.

Ready for IPv6

In case you missed the news, the North American well has finally run dry on IP (IPv4 – 32-bit) addresses. What's been happening for the past few years in Asia, Europe, and Latin America – i.e., the rationing of the addresses that ensure that smartphones, tablets, laptops, desktops, servers, and any piece of equipment with a computer on board have a unique identifier for communicating with the Internet – is now hitting closer to home. As of a few weeks ago, ARIN (the American Registry for Internet Numbers) started scrimping on numbers, offering smaller blocks at a time.

When IPv4 was introduced, it seemed like those 4.3 billion possibilities would last a good long time. But that was then, and this is now. Last fall, Gartner predicted that, in 2015, the number of connected devices comprising the Internet of Things would hit 5 billion. By 2020, forecasts range from 20 billion, to 50 billion, to 200 billion. (To infinity and beyond!)

In case you're wondering why we haven't run out of addresses already, many have already made the switch to IPv6 (128-bit addressing). Google, for one, did so in 2012.

Critical Link has also made the leap. All of our Linux-based SoMs have IPv6 support available. This includes the MitySOM-5CSx, MitySOM-335x, and MityDSP-L138(F) families, as well as our new 5CSx-based MityCAM cameras.

And in case you're wondering whether we'll be running out of IPv6 addresses anytime soon, not to worry. There are roughly 340 trillion trillion trillion unique combinations. Even if the wildest forecasts for the IoT materialize, this should hold us for a while.
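
If you want a feel for just how lopsided that comparison is, here's a quick back-of-the-envelope sketch in Python, using the standard library's ipaddress module (the two example addresses are documentation-reserved placeholders, not real hosts):

```python
import ipaddress

# Total address space: 2**32 for IPv4, 2**128 for IPv6
ipv4_total = 2 ** 32    # about 4.3 billion
ipv6_total = 2 ** 128   # about 3.4e38, i.e., 340 trillion trillion trillion

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")

# The same parsing code handles both families
print(ipaddress.ip_address("192.0.2.1").version)    # prints 4
print(ipaddress.ip_address("2001:db8::1").version)  # prints 6
```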

———————————————————————————————————————————————–
If you’re interested in reading more on this topic, here are a couple of links to posts on this subject. (Some of the information used in my post was taken from these sources.)

“We’ve finally hit the breaking point for the original Internet” by Brian Fung, The Washington Post.

"It's official: North America out of new IPv4 addresses" by Iljitsch van Beijnum, on Ars Technica.

And a shout out to Critical Link’s Mike Williamson, who spotted this for us.

Proof of extra-terrestrials? Not this time.

I'm always on the lookout for stories about the different ways image processing is put to work, but an article I saw a couple of months back in Vision Systems Design was one of the most interesting. It seems that a group of UFO researchers had come across some slides that supposedly showed an alien that had been found in the late 1940s at the site of the Roswell, New Mexico "flying saucer" crash. In reality, what crashed was an Air Force surveillance balloon, but many still maintain that it was a visitor from outer space. And in reality, what the UFO researchers were hoping was proof of alien life was the mummy of a child.

The debunking was made possible by a sleuth who was able to clear up the image’s fuzziness. What was found were the labels “Mummified Body of Two Year Old Boy” and “San Francisco Museum.” While I guess it’s conceivable that an alien brought signage along in its spaceship, as it turned out, sometimes an extraterrestrial is just a down-to-earth human mummy.

The person who made this discovery did so with SmartDeblur, an application that uses the blind deconvolution technique. As SmartDeblur's web site claims, their "program works extremely efficiently and doesn't require any specific skills." While using SmartDeblur may not "require any specific skills," you'll want to have some math skills if you're going to go through the algorithm, which is explained by Vladimir Yuzhikov (whose program SmartDeblur is) here. It may be a bit more math than you're up for on a nice summer's day, but you may want to give it a look.
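
For the curious, the core idea is easier to see in the non-blind case, where the blur kernel (the point spread function) is known rather than estimated from the image. Here's a minimal sketch using scikit-image's Richardson-Lucy deconvolution, a simpler cousin of what SmartDeblur does (the 5x5 uniform blur kernel is my own assumption for the demo):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

# Blind deconvolution (SmartDeblur's approach) must estimate the blur
# kernel from the image itself. This sketch cheats: we blur a test image
# with a known kernel (PSF), then recover the image with Richardson-Lucy.
image = color.rgb2gray(data.astronaut())
psf = np.ones((5, 5)) / 25                     # assumed 5x5 uniform blur
blurred = convolve2d(image, psf, mode="same")

# 30 iterations; more iterations sharpen further but also amplify noise
deconvolved = restoration.richardson_lucy(blurred, psf, 30)
```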

As far as I know, none of Critical Link’s imaging products are used for UFO debunking. We’re more likely to be involved in machine vision, traffic systems, surveillance, Raman spectroscopy, industrial inspection, scientific imaging, low light imaging, etc.

But you never know when a more unorthodox application may come up.


MTBF for the MitySOM-5CSx (Cent’Anni)

At Critical Link, we're very committed to quality. The applications our system-on-modules are used in aren't trivial ones. They're industrial, scientific, medical, telecommunications, defense, transportation… Quality, performance, and longevity all matter. A case in point illustrating our commitment: the results of some testing that one of our customers did on the MitySOM-5CSx. This SOM combines the Altera Cyclone V System on Chip (single or dual hard-core Cortex-A9 applications processors tightly integrated with FPGA fabric), memory subsystems, and onboard power supplies. It's used for applications that require high throughput.

The customer that sent us their test results is in the transportation industry, and the MitySOM-5CSx is being used in their Positive Train Control system. It goes without saying that this system is mission critical.

In doing their testing, the customer went through different use cases, modeling all three of the environments in which the MitySOM-5CSx would be deployed: an air-conditioned technical room; a non-air-conditioned technical room or indoor wayside; an outdoor wayside.

                      Scenario 1      Scenario 2      Scenario 3
Failures per hour     8.92438E-07     1.12526E-06     9.85867E-07
MTBF in hours         1,120,527       888,683         1,014,335
MTBF in days          46,689          37,028          42,264
MTBF in years         128             101             116


There are many assumptions that went into these tests, and I’m not going to get into them here. But based on these results, we’re able to say that, as a general rule of thumb, the MitySOM-5CSx has a Mean Time Between Failure (MTBF) of 100 years.
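
I will say that the arithmetic connecting the table's rows is simple: with an assumed constant failure rate λ (failures per hour), MTBF is just 1/λ, then converted through hours, days, and years. A quick sketch reproducing the table:

```python
# MTBF from a constant failure rate (failures per hour): MTBF = 1 / lambda
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

failure_rates = {
    "Scenario 1": 8.92438e-07,
    "Scenario 2": 1.12526e-06,
    "Scenario 3": 9.85867e-07,
}

for name, rate in failure_rates.items():
    mtbf_hours = 1.0 / rate
    mtbf_days = mtbf_hours / HOURS_PER_DAY
    mtbf_years = mtbf_days / DAYS_PER_YEAR
    print(f"{name}: {mtbf_hours:,.0f} h = {mtbf_days:,.0f} d = {mtbf_years:.0f} y")
```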

Part of my heritage is Italian, and there’s an Italian toast, cent’anni, which expresses wishes for 100 years. I’m pretty proud that with the MitySOM-5CSx, Critical Link is achieving this.

What’s for dinner? Whatever robo-chef’s making

Before my wife and I know it, we’re going to be empty-nesters, and that will mean a bit less pressure to get a good meal on the table every evening. I was thinking that maybe we’d start eating out more often, but now I’m thinking this may be as good a time as any to start thinking about a robotic chef. And what got me thinking “robotic chef” was an article I saw last month in The Economist.

A prototype of a "Top Chef"-level robot cook designed by Mark Oleynik was recently exhibited at a fair in Hanover, Germany.

There are some pretty smart food processors out there already, but Oleynik's robo-chef is a bit more humanoid:

“A pair of dexterous robotic hands, suspended from the ceiling, assemble the ingredients, mix them, and cook them in pots and pans as required, on a hob [Brit for stovetop] or in an oven. When the dish is ready, they then serve it with the flourish of a professional.”

…”The machine’s finesse comes because its hands are copying the actions of a particular human chef, who has cooked the recipe specially, in order to provide a template for the robot to copy. The chef in question wears special gloves, fitted with sensors, for this demonstration. Dr Oleynik’s team also shoot multiple videos of it, from different angles. These various bits of data are then synthesised into a three-dimensional representation of what the chef did while preparing the dish. That is turned into an algorithm which can drive the automated kitchen.” (Source: The Economist)

When the product is released – they’re aiming for GA in 2017 – there’s going to be an online library of 2,000 recipes from celebrity chefs that owners can download. Eventually, folks will be able to add their own family recipes to the mix.

Oleynik’s company, Moley Robotics, will be bringing this robo-chef to market. Click through on the Moley link for a very cool intro demonstrating how it works. Unfortunately, the equipment page is under construction. Too bad: I was hoping to be able to drill down on the technology. Some people like to kick the tires. Me, I like to take a look at what’s on the inside.

I did find a bit more info on GizMag, where I learned that the robo-chef uses:

  • 20 motors
  • 24 joints
  • 129 sensors

That's a lot of technology, but, of course, it's needed in order to replicate what a human hand (powered by a human brain) is doing. There are 27 bones in each human hand, so a pair of hands means 54 moving parts right there…

With a price point of $15K, the robo-chef won’t be coming any time soon to the Catalino kitchen. But it sure is interesting.

Improving automotive safety. (I’m all for it.)

We seem to have our share of automotive-related posts on the Critical Link blog, but that’s because there’s so much interesting technology going into cars these days. And it’s technology that most of us will use, or at least have access to, at some point. Even those of us who really enjoy driving may not mind letting technology take over once in a while, say for having the car parallel park itself in a tight spot.

One aspect of automotive technology I read about recently takes on the distracted driver problem, which is getting worse as the number of distractions is on the rise. (Remember back in the day when the only distractions were turning on the radio and making sure your kids didn’t unbuckle their seat belts?)

Most American drivers admit to using phones in their cars, and plenty of those folks aren't just talking; they're texting and doing everything else you can do with a smartphone. Not surprisingly, studies have shown that phone use is a major contributor to accidents. Much as we'd like to think that good behavior – i.e., forgoing any phoning-while-driving – will solve this problem, we all know that what's really needed is to make using the phone from the driver's seat safer.

In April, a group of researchers at Carnegie Mellon presented their research on distracted drivers at the Conference on Human Factors in Computing Systems.

“Dr Kim, Dr Chun and Dr Dey recruited 25 volunteers, aged from 19 to 69, to make road trips about 20km long. Every volunteer wore five motion sensors—one on each wrist and foot and one on his head—as well as a chest strap that recorded his breathing and heart rates. The car was fitted with an inward-looking camera to observe the volunteer’s “peripheral actions” while driving, such as eating, fiddling with the radio, turning on the windscreen wipers and steering one-handed. A second camera faced outward, to assess the state of nearby traffic. And a “black box” recorder took readings from the car itself, such as the throttle position, slope of the road and engine speed.” (Source: The Economist)

The researchers determined what was going on when drivers were showing signs of stress – e.g., faster heart rates – and when they were doing something (like adjusting the radio) that is peripheral to the main action of driving. They then figured out what might be reasonably safe to do (adjust the radio, listen to a voice message) under what conditions.

“Obviously, the average driver is not going to want to wear body sensors all the time. But as the team report at the conference, they have used the data they collected to write a piece of software which can, based on inputs from the black box alone, identify the safest moment for an interruption from a phone with 92% accuracy. Moreover, Dr Kim says, a future version of the software could be tweaked to add in personal preferences. One driver might prefer to be interrupted when on a straight, flat road, for example. Another might like to wait until he was stopped at a red light.”

(The report, which contains some interesting technical information, is linked in the paragraph above. A few points of interest: The cameras used were smartphone cameras. An Onboard Diagnostic device provided the info on what was going on with the car itself (e.g., position and speed), transmitting the data to one of the smartphones via Bluetooth. YEI 3-Space sensors were placed on the research subjects' foreheads, wrists, and feet. A BioHarness chest belt captured cardio and respiratory data.)
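
The team's actual model is of course more involved than anything I'd sketch here, but the flavor of the approach, classifying each moment as interruptible or not from black-box signals alone, looks roughly like this (everything below, the features, the labeling rule, and the data, is made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical black-box (OBD) features sampled once per second:
# [speed_kph, throttle_pct, engine_rpm, road_slope_deg]
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 600, -8], [120, 100, 4000, 8], size=(5000, 4))

# Made-up labeling rule standing in for the study's annotated ground truth:
# stopped, or cruising gently on a flat road, counts as "interruptible".
y = ((X[:, 0] < 5) | ((X[:, 1] < 30) & (np.abs(X[:, 3]) < 2))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2%}")
```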

The article also mentioned a couple of other distracted-driver safety initiatives.

Especially as a father of three, including a relatively new driver, I'm all in favor of anything that improves automotive safety.

Play Ball!

I'm not much of a baseball fan, but I do know that it's arguably the most data-intensive sport, and has been recording just about everything that can be recorded for an awfully long time. And now Major League Baseball is going even further with the statistics it's gathering by introducing a system called Statcast.

“Statcast can, pretty much, follow and record everything that happens in a baseball game… It calculates the speed and curvature of a pitch, how rapidly the ball spins and around what axis, and how much faster or slower than reality that pitch appears to be to the hitter, based on the length of the pitcher’s stride. When the ball is hit, the system measures how quickly it leaves the bat and how its path is affected by atmospheric conditions. It then tracks how long fielders take to react before moving, and the efficiency of their routes to the ball’s eventual landing spot. And it takes just 15 seconds to crunch these numbers and integrate them with video recordings.” (Source: The Economist)

While I’m not that interested in baseball, I am interested in the technology underlying Statcast, which uses two different pieces of equipment, one that keeps its eye on the players, while the other keeps its eye on the ball.

ChyronHego’s TRACAB System is what’s following the players. TRACAB’s image-processing technology is used:

“…to identify the position and speed of all moving objects within arena-based sports, and does this uniquely in true real-time. The resultant live data of highly accurate X, Y and Z coordinates is supplied 25 times every second for each and every viewable object, whether they are players, referees or even the ball.” (Source: ChyronHego)
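
With positions arriving 25 times a second, a stat like route efficiency falls out of some simple geometry: the straight-line distance from a fielder's starting point to the ball's landing spot, divided by the distance he actually covered. A toy sketch with hypothetical track data (my own geometry, not ChyronHego's actual algorithm):

```python
import numpy as np

def route_efficiency(track: np.ndarray) -> float:
    """Straight-line distance over actual path length for a fielder track.

    track: (N, 2) array of X, Y positions sampled at 25 Hz.
    Returns a value in (0, 1]; 1.0 means a perfectly direct route.
    """
    steps = np.diff(track, axis=0)                     # per-sample displacements
    path_length = np.linalg.norm(steps, axis=1).sum()  # distance actually covered
    direct = np.linalg.norm(track[-1] - track[0])      # start to landing spot
    return float(direct / path_length)

# A fielder drifting slightly off-line before committing to the ball
track = np.array([[0.0, 0.0], [1.0, 0.5], [2.5, 0.4], [4.0, 0.0], [6.0, -0.5]])
print(f"Route efficiency: {route_efficiency(track):.2f}")
```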

Because white balls are difficult to track against a white background (i.e., fans in the stands wearing white shirts), they need different technology there. Enter TrackMan, which “tracks 27 different data points per play,” including pitch metrics like velocity and spin, and hit metrics like exit speed from the bat and hang time. (Source: TrackMan Baseball)

Because baseball is televised, Statcast also deploys an additional camera that looks at:

 “…the field as a whole, providing co-ordinates that map the radar and optical data onto broadcasters’ video feeds. And the result does indeed make for compelling TV. It permits commentators to illustrate replays with dazzling visual displays.” (We’re back to The Economist here.)

You can see what that looks like here.

Anyway, with the baseball season in full swing, I thought people might be interested in reading a bit about Statcast. I just wish there were more drill-down info on the technology. That's the kind of data I'm really interested in.

“Interesting info if you’re a geek/engineer”

A few months back, Tim Iskander, one of our engineers, sent out an all-hands e-mail with a link to TI's Analog Engineer's Pocket Reference. The subject line was: "interesting info if you're a geek/engineer." The body of the e-mail was pretty much the link and a note that read "If you're not [a geek], then sorry for the spam. (But, hey…I think you're outnumbered here!)"

Tim was right about the non-engineers being outnumbered at Critical Link. And he was right about this guide being of use to engineers.

In the preface, Art Kay and Tim Green, the editors, write:

"This pocket reference is intended as a valuable quick guide for often used board- and system-level design formulae. This collection of formulae is based on a combined 50 years of analog board- and system-level expertise. Much of the material herein was referred to over the years via a folder stuffed full of printouts. Those worn pages have been organized and the information is now available via this guide in a bound and hard-to-lose format!"

I think that Pocket Reference is something of a misnomer. The Reference is nearly 100 pages long. But if you put it on a thumb drive, you could stick it in your pocket. And it would be useful to have at your fingertips, as it covers a lot of the basics (there's a quick worked example after the list below). The areas covered include:

  • Key constants and conversions
  • Discrete components
  • AC and DC analog equations
  • Op amp basic configurations
  • Op amp bandwidth and stability
  • Overview of sensors
  • PCB trace R, L, C
  • Wire L, R, C
  • Binary, hex and decimal formats
  • A/D and D/A conversions

Not that any of us Critical Linkers would have to look any of this stuff up. Still, it’s handy to have it all in one place. Just in case.

So thanks to Art and Tim at TI for pulling this together, to TI for making it available to geeks/engineers, and to our own Tim Iskander for passing the info on.

Big news on the semiconductor front

There’s been so much change in the semiconductor world these days, it’s hard to keep up with it all. Last week, it was the Avago-Broadcom deal. This week, with the agreement that Intel will acquire Altera, the wheelings and dealings are coming a bit closer to home.

Altera has been a partner of ours for the past two years. Our MitySOM-5CSx features the Altera Cyclone V SoC, which combines FPGA logic and a dual-core ARM Cortex-A9 processor subsystem.

Obviously, with this acquisition, we’re talking about a far different order of magnitude than our partnership with Altera, but Intel is interested in Altera for the same reasons that we are. One of them is FPGA.

"Intel is known for general-purpose microprocessor chips that can be programmed to perform a near-infinite variety of computing tasks. Meanwhile, some kinds of operations – such as converting a video from one format into another or encrypting data so unauthorized people can't read it – can be executed more quickly using circuits custom-tailored for the task.

“But the cost of making chips from scratch has risen steadily over the years, and fewer companies now do so. Altera, along with rival Xilinx Inc., popularized a middle path with chips called field-programmable gate arrays, or FPGAs, that are configured by customers to handle certain kinds of jobs after the chips leave the factory.

“Altera’s programmable chip technology is widely viewed as a way for Intel to protect its stronghold in selling chips for servers, a market that generated more than half of Intel’s operating profit in the March-ended quarter. Companies have been using FPGAs alongside Intel’s Xeon chips to help speed up their servers, and some analysts believe that Intel needs to have an internal source of the technology to respond to the trend.” (Source: Wall Street Journal)

Another reason is ARM, which has been making inroads into the data center. The good news for Critical Link is that Intel will continue to support Altera’s ARM technology products.

I think that this acquisition will be all for the good. The possibilities for other devices that will be developed as a result of Intel’s acquiring Altera should be very, very interesting. In the embedded space we’re particularly interested in seeing what innovation the Altera acquisition will bring to the Intel Atom and Quark product lines.

————————————————————————————————————————————–

(If you’re interested, you can find the press release here.)

Is it just me?

I recently had a diagnostic ultrasound, and while I was lying there on the table looking at the fuzzy, grainy image, something occurred to me. And that something was that the image quality doesn't seem to have improved all that much over the last 26 years. I can pinpoint that "26 years" thing because the first ultrasound image I ever looked at was that of my oldest child.

Back then, we could easily overlook a lot of fuzziness, graininess, and black and white images. All we were interested in was looking at our baby. But it is a bit surprising that, with all the new medical technology out there, run of the mill ultrasounds are still being done the same way they were way back then.

Or is it just me? I guess it could be that the imaging has improved, but my eyesight isn't quite as sharp as it was when I first saw my son. Or is it really the case that the technology hasn't budged, even though sensors have sure gotten a lot more powerful over the past few decades?

Maybe it's a case – which (I guess) we'd all like to see more of – of 'if it ain't broke, don't fix it.' If ultrasound imaging is cost-effective and low risk, why use more expensive (and potentially hazardous) technologies like MRI and CT scans?

Meanwhile, we will be having a doctor in the family – our second ultrasound child is on her way to becoming a veterinarian – so I think I'll check in with her to see what the thinking is on whether ultrasound imaging has changed all that much. It may just be that it really doesn't need to. Or it could be that it's just me who couldn't resist critiquing the diagnostic technology being used on me!