
Another Case for Positive Train Control

All the facts aren’t yet in – was this human error? mechanical error? electronic systems error? – but somehow the news of last week’s horrific train derailment in Philadelphia is made even worse when you repeatedly hear comments from authorities that this accident was preventable. What could have prevented this accident is Positive Train Control (PTC).

We have a special interest in this technology because we have a client that develops PTC systems and uses a Critical Link SOM in its product. In fact, after the New York commuter train derailment in late 2013, we had a post on the topic of PTC, in which we noted:

“By the end of 2015, such systems are scheduled to be in place throughout the US passenger and freight rail network. (There has been some pushback to extend this deadline – PTC systems are very complex to implement – but the recent NYC crash will likely push back on that pushback.)” (Source: Critical Link blog – December 18, 2013)

The Amtrak train was obviously one that was not yet equipped with PTC.

We also had an article in Embedded Computing entitled “M2M and embedded processing save lives in Positive Train Control”, which explains PTC, and gets into some of the technical detail on how Critical Link’s MityDSP-L138F is used in PTC communications managers.

We are proud of the role we play in this important technology. Railroad systems are complex. So is PTC: it can’t be implemented overnight, so it’s understandable that it takes time to get all the different segments of a system up and running.  But PTC can save lives. If something good can come out of the Philadelphia derailment, it may be that we take another hard look at this important technology.

Moore’s Law Turns 50

I can think of two interesting and important things that happened in 1965.

One was the introduction of Moore’s Law.

Moore’s Law is based on an observation that Gordon Moore, the co-founder of Intel, made that year: looking back over the hardware history to date, the number of transistors in an integrated circuit had doubled just about every year. Moore turned that observation into a prediction, which he revised in 1975 to forecast that the doubling time would be two years. That prediction has actually held up pretty well over the past five decades. One big reason is that the semiconductor industry adopted it as the target for its R&D efforts; with the whole industry working towards – and achieving – the goal of doubling capacity every two years, Moore’s Law became something of a self-fulfilling prophecy. The Law also expanded beyond semiconductors to much of the digital world: memory capacity, prices, and so on.
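Just to make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. (The starting transistor count is a round number I picked for illustration, not the spec of any actual 1965 part.)

```python
# Moore's Law back-of-the-envelope: transistor counts doubling every two years.
START_YEAR = 1965
START_COUNT = 2_000  # illustrative round number, not a real 1965 chip

for year in range(START_YEAR, 2016, 10):
    doublings = (year - START_YEAR) // 2  # one doubling every two years
    count = START_COUNT * 2 ** doublings
    print(f"{year}: ~{count:,} transistors")
```

Fifty years at that pace works out to 25 doublings, or a roughly 33-million-fold increase. No wonder the effects feel so dramatic.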

Whether or not Moore’s Law caused all that technological progress, the fact that computing power increased so regularly and dramatically, while prices decreased just as regularly and dramatically, is what has made it possible for us to hold a remarkably powerful computing device in our hand (or wear it on our wrist) and use it to summon up our smart thermostat and whatever else is out there in the vast Internet of Things.

The article I saw on this in Fortune a few weeks back sums things up pretty well:

“Take the example of social networking using a mobile phone. It works because the cost of a transistor has dropped a million fold and computing is about 10,000 times more energy efficient since 1980, when this writer first went to engineering school. Consequently, a $200 smart phone powered by a biscuit-sized battery contains a micro-chip with a few billion transistors in it and enough computing power to digitally process an image, and then upload and share it wirelessly using powerful mathematics to encode the data. This is a consequence of Moore’s Law in action.” (Source: Fortune)

Despite its 50-year track record, most believe that Moore’s Law may be starting to show its age, and that in another ten years it will start to run up against some hard physical limitations, once feature sizes approach the atomic level.

All very interesting.

As for the other important thing that happened in 1965? Well, he may be turning 50, but he’s not planning on slowing down anytime soon…

Advanced processing for ADAS

Like so many things in life, cars are getting smarter. And all those smarts are requiring more and more computing power. Even not-so-smart cars run on embedded systems and complex electronics. As David Blaza pointed out in a recent post on Embedded, the 2011 Chevy Volt had 10 million lines of code in it.

For really smart cars, ARM has predicted that, by 2024, vehicles deploying Advanced Driver Assistance Systems (ADAS) will need a minimum of 100x more computing performance than 2016 models.

“Today, premium cars have more than 100 processors on board utilizing tens of millions of lines of code. To meet future ADAS demands, ARM expects processor performance compared to 2016 vehicles to increase 20x by 2018, 40-50x by 2020 and 100x by 2024. Meeting this ambition will require deeper functional safety support and higher performance, energy-efficient SoCs.” (Source: ARM)
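Out of curiosity, I worked out the compound growth rate those milestones imply. Here’s a quick sketch (the 45x is my own midpoint for ARM’s 40-50x figure):

```python
import math

# ARM's projected ADAS performance multiples, relative to 2016 vehicles.
milestones = {2018: 20, 2020: 45, 2024: 100}  # 45 = midpoint of the 40-50x range

for year, multiple in milestones.items():
    span = year - 2016
    annual = multiple ** (1 / span)          # implied compound annual growth
    doublings = math.log2(multiple) / span   # same thing, in doublings per year
    print(f"{multiple}x by {year}: ~{annual:.1f}x per year "
          f"(~{doublings:.1f} doublings/year)")
```

Interestingly, the curve is front-loaded: 20x by 2018 implies more than quadrupling every year, while the remaining climb to 100x by 2024 settles down to well under a doubling per year.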

To do their part, “ARM is licensing functional safety support across its Cortex-A, Cortex-R and Cortex-M processor families.”

As cars get smarter – more and more reliant on electronics and less and less reliant on the driver – functional safety becomes more and more important. Yes, a lot of what’s going into smart cars is on the entertainment side, but that’s relatively minor compared to ADAS. It’s one thing when your music system fails and you can’t access your playlist. It’s quite another when the actual driving is systems-dependent. Malfunctioning systems are no longer an annoyance; they’re a clear danger.

Our partner TI is one of the companies jumping on board, and will be licensing the latest ARM Cortex-A72 processor. (We don’t – at least for now – play in this space. But in case you’re wondering, Critical Link does have some Cortex-based SOMs: the MitySOM-335x, which is based on the Cortex-A8, and the MitySOM-5CSX, which is based on the Cortex-A9.)

Anyway, that 100x increase in compute performance required in less than 10 years caught my eye.

SureFlap and the Internet of Animal Things

I’m a dog person, so I’m not sure how this one caught my attention, but recently I ran across something about SureFlap, a British company that makes products that work off of your cat’s microchip, governing the cat door and feeding dish. (The products will also work for small dogs.)

Anyway, the story behind SureFlap is interesting, and not just for technical reasons.

It seems that, in the UK, cats are more apt to be outdoor animals than they are in the States, which means that a lot of folks have cat doors. But the problem with plain old-fashioned cat doors is that they don’t just let your cat in and out. They let any old cat into your house, which may make for some unwanted intruders for both your cat and you. There are automated cat doors out there which, for the most part, work off of RFID. You tag your cat’s collar and the “smart” cat door reads it.

Problem is, cats in the UK are less apt to be wearing a collar than their American feline cousins are. And Dr. Nick Hill, a physicist in Cambridge, England, didn’t want his cat Flipper having to wear one, given that collars can snag on branches and otherwise hang a cat up. So he came up with the idea of using Flipper’s microchip, which had been implanted so that Flipper could be identified if he got lost. To get a cat door to work with a microchip, Hill had to develop a reader that would work with a moving object (as opposed to a hand-held device that, say, a vet or animal officer might use to scan the embedded chip while the cat is being held still).
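SureFlap hasn’t published its firmware, of course, but the control logic is easy to picture. Here’s a minimal sketch in Python, with placeholder functions standing in for the actual reader and latch hardware (the chip ID shown is just an example in the standard 15-digit ISO pet-microchip format):

```python
import time

# IDs of the resident cats' implanted microchips (15-digit ISO format).
REGISTERED_CHIPS = {"985141000123456"}  # example ID, not a real cat

def read_chip_id():
    # Placeholder for the RFID reader driver: returns an ID string when a
    # chip passes within range of the antenna, or None if nothing is there.
    return None

def unlock_flap(duration_s=5):
    # Placeholder for the latch control.
    print(f"flap unlocked for {duration_s} seconds")

while True:
    chip_id = read_chip_id()
    if chip_id in REGISTERED_CHIPS:  # a known cat: open up
        unlock_flap()
    time.sleep(0.1)                  # keep polling; unknown cats stay locked out
```

The hard part, as Hill found, isn’t this loop; it’s getting a reliable read off a chip that’s moving past the antenna rather than being held still under a hand scanner.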

This all led to the first SureFlap Microchip Cat Door, which led to a larger version for bigger cats (and small dogs), which led to one that would work with more than one cat. Eventually, Hill used the technology to develop a pet feeder as well.

And, this being the world of the Internet of Things, pet owners are also now able to follow the comings and goings of their cats.

If you’re interested in reading more about this, the marketing message is on the SureFlap site, and the patent info on Nick Hill’s invention is here.

As I said, I’m a dog person, but I found the SureFlap story pretty interesting.

How the MityDSP-L138+FPGA SOM is being used

One of the things that keeps life interesting at Critical Link is seeing how our customers are utilizing our system-on-modules in their products. While the MityDSP-L138+FPGA – part of our OMAP-L138 family of SoMs – has been available for a few years, our customers are still coming up with innovative ways to use it. Here are a couple of them.

RF-based position tracking: This customer wanted to focus their engineering team on what sets them apart in the marketplace, which is three-dimensional position tracking that uses Wi-Fi. They left the background processing of the inputs to the Mity. The application consists of a target on the back of a tablet. You turn on the tablet camera and the app augments reality based on your location. You look “through” the tablet and see a different view of your world. The technology is very exciting, and there are many different potential uses for it: gaming, surgery, architectural design, virtual tours. We’re looking forward to seeing where it all ends up!

We also have a customer that’s developing a visible light communications technology that delivers high-speed, bidirectional, networked mobile communications. It does this in a way that’s similar to Wi-Fi; essentially, it uses the visible light spectrum instead of radio frequencies to enable wireless data communication. Our SoM will be part of the data transmission system.

We’re excited about both of these uses of the MityDSP-L138, and proud to be behind the scenes in these leading-edge applications.

Where’s your e-trash going? Trash Track knows

A while back now, I caught an episode of Inside Man on CNN. This is a show in which Morgan Spurlock goes behind the scenes to explore some topic from the inside out. (If the name sounds familiar, Spurlock’s the guy who, a decade ago, produced the documentary Super Size Me, which told the story of a month that he spent eating fast food.) Anyway, the topic of the show that I watched was what happens to the trash that the average American generates each day.

Spurlock traced three different types of junk: garbage which, if you live in NYC like he does, ends up in a landfill in Pennsylvania; recyclables, which – fortunately – get repurposed (but which, unfortunately, don’t include those plastic bags you get at the grocery store or CVS); and electronics.

It was the electronics tracking that most interested me.

To track where his old computers and other gear were off to, Spurlock hauled it all up to MIT, where the researchers at Trash Track tagged it so that they could follow it along. (Most of our e-waste is recycled in the US, but some of it goes to developing nations, where the regulation is non-existent. One of the purposes behind tracking e-trash is to help protect the poor workers (and the poor environment) from the toxins that are present in it.)

The group that does the trash tracking is part of the senseable city lab at MIT, which gets involved in a lot of interesting projects, from mapping the routes visitors take at the Louvre (and how long they spend in front of the Mona Lisa) to measuring air pollution in China.

With Trash Track, they tag a piece of the equipment (e.g., the board), then track where it goes by periodically measuring its location and sending the data back over the cellular network. The tags use:

“…GPS and CDMA cell-tower trilateration based on the Qualcomm inGeo™ platform in combination with Sprint’s cell phone network, utilizing Qualcomm’s gpsOne® technology to provide both accuracy and availability for position tracking applications like ours. Future generations of devices will work seamlessly across CDMA/GSM/UMTS networks, a feature that will allow tracking items across international borders.” (Source: senseable lab at MIT Trash Track).

To make sure the tags stayed “alive” long enough, they developed the ability to hibernate the tag, turning it on every couple of hours to grab the position data.

“…The tag also uses a motion sensor, which allows it to continue being in hibernation mode if no movement has been detected, thus further extending the battery life. If movement has been detected, the motion sensor wakes up the device to check and report its new position. Our algorithms vary the sampling rate in response to conditions sensed by the tag. In particular, the tag uses a set of orientation sensors to sense changes in position to increase the location sampling rate when the tag is apparently moving, and whenever previously unseen cell tower IDs are observed.”
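To make that a bit more concrete, here’s a rough sketch of the duty-cycling logic as I read the description above. (This is my own reconstruction, in Python with placeholder sensor and radio functions, not the lab’s actual firmware; the intervals are plausible guesses based on the “every couple of hours” figure.)

```python
import random
import time

HIBERNATE_INTERVAL_S = 2 * 60 * 60  # wake every couple of hours, per the lab
FAST_INTERVAL_S = 10 * 60           # sample more often while apparently moving

def motion_detected():
    # Placeholder for the tag's motion sensor.
    return random.random() < 0.5

def get_fix():
    # Placeholder for a GPS / cell-tower trilateration fix: (lat, lon, cell_id).
    return 43.05, -76.15, random.choice(["tower_a", "tower_b", "tower_c"])

def report(lat, lon):
    # Placeholder for the upload over the cellular network.
    print(f"reported position ({lat}, {lon})")

seen_cells = set()
while True:
    if not motion_detected():
        time.sleep(HIBERNATE_INTERVAL_S)  # nothing moved: stay asleep, save battery
        continue
    lat, lon, cell_id = get_fix()         # movement: wake up, get a fix, phone home
    report(lat, lon)
    new_tower = cell_id not in seen_cells
    seen_cells.add(cell_id)
    # A previously unseen cell tower suggests real travel: raise the sampling rate.
    time.sleep(FAST_INTERVAL_S if new_tower else HIBERNATE_INTERVAL_S)
```

The point of all the gating is battery life: the radio and GPS are the power hogs, so the tag only fires them up when the cheap motion sensor says there’s something worth measuring.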

The trash tags themselves are environmentally friendly: they’re RoHS compliant.

Here’s the flow of how it works.

[TrashTrack flow diagram]

In any case, Morgan Spurlock and Trash Track help remind us that we need to focus attention on just what happens to all that trash we toss out, once it’s out of sight.

The History of Software

Last Saturday, April 4th, marked the 40th anniversary of the founding of Microsoft, which is sure hard to believe. I vaguely remember when they were sort of the new kid on the block, back in the old MS-DOS days. No more: not only has Microsoft turned forty, but later this year Bill Gates turns the Big 6-0.

And speaking of the Big 6-0, software is celebrating its sixtieth anniversary this year, too.

At least that’s according to an infographic of the history of software that Capterra pulled together.

I would think that the date of the first software might be a bit harder to pinpoint than the founding of Microsoft, or what’s on Bill Gates’ birth certificate. But we’ll take Capterra’s word for it that 1955 was the date when someone came up with a replacement for punch cards.

They give credit to an outfit called the Computer Usage Company, which was the first company to bring off-the-shelf computer software to market. (Prior to that, there was software, but it was all pretty much custom code.) Computer Usage is no longer among us, having gone bankrupt in 1986. But there are a lot of familiar names along the way that are still around.

ADP was using mainframes to do payroll processing in 1957. (They’ve stayed true to their core mission over the years, haven’t they?)

SAP was founded in 1972; Oracle in 1977. Peachtree brought out the first accounting package in 1978, and Microsoft introduced Office in 1989 (ten years after VisiCalc, the first spreadsheet, was launched; no mention of Lotus or Multiplan. Remember them? Wow, Multiplan brings back many memories!)

Linux turns 24 this year, and the first web browser turns 22.

There’s a sidebar on programming languages that takes us from Fortran (1956) to Ruby (1995). Not sure why, but programming languages dropped off the map – or at least off of the infographic – at that point.

Anyway, at Critical Link one of the big changes has been the shift from our embedded software being primarily developed for bare metal or an RTOS of some type (uC/OS, pSOS, VxWorks, MQX, and others) to where we are now, with embedded Linux becoming the preferred OS for embedded products.

What all this tells me is that the history of software is still being written. It’ll be very interesting to see how it evolves from today forward!

——————————————————————————————————————-

For whatever reason, I wasn’t able to embed the infographic as anything other than a blurry eyechart. You can find it here. (And for the record, I initially saw a reference to this infographic on the BQE blog.)

That self-driving car in the rear-view mirror?

I saw a recent article in The New York Times in which Elon Musk (of Tesla fame) announced that his company would be bringing “autonomous technology” (i.e., the self-driving car) to market this summer.

“The technology would allow drivers to have their cars take control on what he called “major roads” like highways.”

“Mr. Musk said that a software update — not a repair performed by a mechanic — would give Tesla’s Model S sedans the ability to start driving themselves, at least part of the time, in a hands-free mode that the company refers to as autopilot.” (Source: NY Times)

Not that I have that long and terrible a commute, but there sure have been times when I wished that I had a car that I could put on “autopilot.” (If nothing else, I think that most parents have had moments on long road trips when they would have liked to have been able to hop into the back seat and referee!)

In any case, Mr. Musk may be getting ahead of himself.

For one thing, self-driving cars have not yet been legalized in most states, and, in the states where they have been given the green light, it’s for testing purposes only.

A Tesla spokesman did say that their new system will be legal, so it will probably be along the lines of the self-driving capabilities already available in some cars: they can do some self-driving, but the human driver has to keep his or her hands on the wheel. (Which sounds a bit like Mr. Toad’s Wild Ride at Disneyland.)

Another issue is insurance. If a self-driving car gets in an accident, who’s at fault? The car manufacturer? The software developer? The human driver?

Anyhow, sounds like Tesla is on its way, more or less:

Mr. Musk said on Thursday that Tesla had been testing its autopilot on a route from San Francisco to Seattle, with company drivers letting the car navigate the West Coast largely unassisted.

After the software update this summer, the cars can also be summoned by the driver via smartphone and can park themselves in a garage or elsewhere, he said. That feature, though, will be allowed only on private property for now, he said.

Certainly, the day is coming when there will be self-driving cars, and the day may be closer than we might think.

After all, rapid transit systems have had driverless trains for years. (Most of the ones operating in the U.S. are in airports.) And if you think about planes, that’s where the word autopilot comes from, no?

It will be interesting to see how rapidly this all plays out. As an engineer, I find the idea intriguing. But this is complex, high-stakes technology, and, for now, I think I’d only be comfortable with it if it comes with a manual override. I’m not quite ready to sit there like a crash-test dummy…

Anyhow, that self-driving car in the rear-view mirror may be appearing sooner than you think.

A Quantum Leap in Light Photography

It seems as if, every time you turn around, there’s something extraordinary happening in the technological and scientific world. In this instance, it was my colleague Matt Cook who turned around and saw an article by Lucy Ingham on Factor-Tech entitled “Light Photographed as a Wave and a Particle for the First Time Ever.”

The article calls this a “momentous achievement,” which it sure is, given that scientists had, up to now, only been able to observe one behavior or the other, not both simultaneously.

“The dual behaviour of light, which is demonstrated through quantum mechanics and was first proposed by Albert Einstein, was only possible to capture by scientists at École polytechnique fédérale de Lausanne (EPFL), Switzerland, due to an unorthodox imaging technique.”

“The scientists generated the image with electrons, making use of EPFL’s ultrafast energy-filtered transmission electron microscope. This gave them a rare advantage over other institutions, as EPFL has one of only two such microscopes in the world.”

“The image was achieved first by firing a pulse of laser light at a miniscule metallic nanowire, adding energy to charged particles in the nanowire and making them vibrate.” (Source: Factor-Tech.)

All this, of course, sounds very abstract and out there. As in, what’s this got to do with us:

“’This experiment demonstrates that, for the first time ever, we can film quantum mechanics – and its paradoxical nature – directly,’” said research leader Fabrizio Carbone.

“However, the research could also be important for the future development of quantum-based technology.”

“’Being able to image and control quantum phenomena at the nanometer scale like this opens up a new route towards quantum computing,’” he added.

Okay, you’re probably still asking what all this has got to do with us. Quantum computing is still pretty new, but as it matures, it will be able to perform computations and analysis far more quickly than anything that traditional computers can do. It will be of special interest in the security world, since it will be able to do cryptographic analysis orders of magnitude more rapidly than it’s done now.

So, nothing we have to spend much time thinking about, other than to reflect on how cool it is. And how amazing that it’s taken 100 or so years after Einstein proposed it for the technology to be in place to actually observe it.

——————————————————————————————————————

We don’t talk quantum all that often, but we did in a post last November, Schrodinger’s Cat (sort of).

Apple Watch: maybe just a gadget, but a cool one

Like pretty much everyone else who likes tech gadgetry, I had my eyes on Apple’s recent “media event” showcasing the Apple Watch. CNET’s Scott Stein had a long and detailed review of it, which you can find here, and most of what I’ll cover in this post is based on what I read there. I’ve pulled out the key points (and/or what I was most interested in), so here goes.

The Apple Watch is a smartwatch, another entry into the wearable tech market. As a smartwatch, it promises to be pretty darned smart. It’ll play your music, track your fitness, send and receive messages, make payments (via Apple Pay), and operate the smart devices in your home (e.g., a Nest Thermostat). It communicates with your iPhone over Wi-Fi and Bluetooth (the iPhone is the conduit to other smart devices and functions like GPS; and note: it won’t work with an Android phone, just an iPhone 5 or later). And, yes, since it tells time, you can actually use the Apple Watch as a watch. The promised battery life is 18 hours, and the prices will range from $349 on up to a rather stratospheric $17K for a fully loaded version in an 18-karat gold case. (I think I’ll take a pass on that one.)

I’ve grabbed a couple of paragraphs from Scott that offer more technical details:

“The watch runs on a brand-new S1 processor, is equipped with a gyro and accelerometer, and can piggyback on the Wi-Fi and GPS from your phone. You press down on the crown to get to the home screen. The watch will take dictation and offers very precise synchronized time to plus or minus 50 milliseconds. It also has a “Taptic” haptic processor that offers a subtle vibrational feedback for notifications, alarms and other messages, plus a force-sensitive touch display.”

“Like the iPhone 6, the Apple Watch has NFC. This will enable those Apple Pay payments and help it act as a door-opening key at hotels.”

Like everything else that Apple touches, there are bound to be tons of apps built for it, which will make it more interesting. But not interesting enough for me to spring for one. Even with all the applications, the Apple Watch is just a gadget, and the form factor is pretty limiting. Much of the heavy app lifting occurs on the iPhone, which you’re just as apt to have with you as you are the watch. So you don’t need the watch as an intermediary.

There is one good use case that I can think of, however.

Now that the software is more mature (originally, it was a TI development kit for the MSP430), I’ve been putting my MetaWatch back on my wrist for the past few days. For someone like me who keeps his phone in silent mode, it’s very handy to get a gentle vibration on the wrist when a call comes in. This keeps me from missing calls. Based on this little experiment, even though smart watches are just gadgets, I think it will be worth it to get an Android Wear watch when the next generation comes out.