
Lots of technology behind the moon landing

With this July’s celebration of the 50th anniversary of the moon landing, I’ve been reading a lot about Apollo 11, especially the technology involved in it all. What’s most interesting is how primitive, by current standards, the technology was that got astronauts Neil Armstrong, Buzz Aldrin, and Michael Collins to the moon, and Armstrong and Aldrin onto it. That, and the fact that so much of that technology had to be built from scratch.

Obviously, there was something to go on. Rockets in the United States had been in serious development since World War II. (Or since the 1920s, if you count the foundational work of Dr. Robert Goddard, who launched the first liquid-fueled rocket in Worcester, Massachusetts, in 1926.) And both the Soviet and American space programs had been putting men into space since 1961.

…but getting a rocket powerful enough not only to escape Earth’s gravity, but also to propel a spacecraft more than 384,000 kilometres, was going to require some innovative thinking.

“The truly amazing thing is that nothing existed. Every single thing had to be built from scratch and to spec,” said Erin Gregory, assistant curator at the Canada Aviation and Space Museum. “The Saturn V rocket is one of the most underappreciated aspects — I think — of the Apollo program, in the amount of time and effort that had to go into creating this mammoth rocket.”

While the rocket required to launch two astronauts into orbit around Earth in the Gemini program — which ran from 1961 to 1966 — used roughly 500,000 pounds of thrust, the Saturn V [used for the Apollo 11 moon landing] would require much, much more than that: 7.5 million pounds. (Source: Canadian Broadcasting Corporation)

The computing power in the Saturn V wasn’t much – a fraction of what we’re all carrying around in our smartphones. And that computing power (and the computing power on the ground – remember those giant, room-filling/building-filling computers?) needed to be a lot more robust than what had been needed to orbit the Earth. But “software was in its infancy, as were semi-conductors and computer chips which, until Apollo, didn’t have much of a market.”

One bit of the moon landing technology I wasn’t familiar with was the memory. Chris Gainor, an Apollo historian, had this to say about how “memory was put into the Apollo computers”:

“The hardwired memory had to be established well in advance of the flight, because they had these large groups of women who had to knit. They had to literally knit these wires together in certain ways.” 

The lunar module itself required technology beyond what had been used in the earlier space modules. Those modules just had to get the astronauts back to Earth. Apollo 11 needed a separate module to get Armstrong and Aldrin onto the surface of the moon and back to the command module helmed by Collins. So, a lot more computing power.

I also saw an article by Cliff Saran in Computer Weekly that had additional lowdown on the Apollo 11 technology. 

The so-called Apollo Guidance Computer (AGC) used a real-time operating system, which enabled astronauts to enter simple commands, by typing in pairs of nouns and verbs, to control the spacecraft. It was more basic than the electronics in modern toasters that have computer-controlled stop/start/defrost buttons. It had approximately 64 Kbytes of memory and operated at 0.043 MHz.

Just think of this next time you pop an English muffin in your toaster!

The astronauts could program the hardware using “a small set of machine code instructions.”
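
To get a feel for that noun/verb interface, here’s a toy Python sketch of a DSKY-style dispatcher. The verb and noun codes below are illustrative stand-ins, not the real AGC tables:

```python
# Toy sketch of the AGC's DSKY-style verb/noun command entry. The codes
# and actions below are illustrative only, not the real AGC tables.

VERBS = {
    16: "monitor",    # display a value continuously
    37: "run",        # start a program
}

NOUNS = {
    36: "mission clock",
    68: "range to landing site",
}

def execute(verb: int, noun: int) -> str:
    """Dispatch a verb/noun pair the way an astronaut keyed one in."""
    try:
        action, target = VERBS[verb], NOUNS[noun]
    except KeyError:
        return "OPR ERR"   # the DSKY's operator-error response
    return f"{action} {target}"

print(execute(16, 36))   # -> "monitor mission clock"
print(execute(99, 1))    # -> "OPR ERR"
```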

The AGC program, called Luminary, was coded in a language called MAC (MIT Algebraic Compiler), which was then converted by hand into assembly language that the computer could understand. The assembly code was fed into the AGC using punch cards.

Amazingly, the code listing for the AGC program can be downloaded as a PDF file. There is also an equivalent program for the lunar lander.

The AGC was designed to be fault-tolerant and was able to run several subprograms in priority order. Each of these subprograms was given a time slot to use the computer’s scarce resources. During the mission, the AGC became overloaded and issued a “1202” alarm code.

Just seconds prior to the moon landing, Neil Armstrong alerted Mission Control to that error but was assured that he could ignore it. (Good to know that some things haven’t changed over the decades.)
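
The executive’s scheduling scheme is worth a sketch. Here’s a loose Python approximation of priority-ordered jobs that get shed under overload – my simplification for illustration, not Luminary’s actual executive:

```python
import heapq

# Minimal sketch of a priority-ordered executive, loosely inspired by the
# AGC's job scheduler. When there isn't time for everything, low-priority
# jobs are shed -- roughly the idea behind recovering from a 1202
# "executive overflow". An illustration, not the real Luminary code.

def run_cycle(jobs, time_budget_ms):
    """jobs: list of (priority, cost_ms, name); lower priority = more urgent."""
    heapq.heapify(jobs)
    completed, shed = [], []
    while jobs:
        priority, cost, name = heapq.heappop(jobs)
        if cost <= time_budget_ms:
            time_budget_ms -= cost
            completed.append(name)
        else:
            shed.append(name)   # overloaded: drop this job for the cycle
    return completed, shed

done, dropped = run_cycle(
    [(1, 10, "guidance"), (2, 5, "display update"), (3, 8, "rendezvous radar")],
    time_budget_ms=16,
)
print(done, dropped)   # guidance and display run; the radar job is shed
```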

Experts cite the AGC as fundamental to the evolution of the integrated circuit. It is regarded as the first embedded computer.

Those early astronauts were all “fly boys” – pilots whose preference would have been to control their “ship” manually – but the moon mission demanded a level of precision that only a computer could deliver. Fortunately, those “fly boys” were also engineers!

On the ground, the computers were provided by IBM – its mainframe workhorse, the System/360 Model 75. One of these System/360s:

…was used by Neil Armstrong and Buzz Aldrin to calculate lift-off data required to launch the Lunar Module off the Moon’s surface and enable it to rendezvous with Command Module pilot Michael Collins for the flight back to Earth.

At the time, IBM described the 6Mbyte programs it developed, to monitor the spacecrafts’ environmental and astronauts’ biomedical data, as the most complex software ever written.

That was then and this is now, and we all have “man on the moon” technology in our pockets and on our desks.  And a lot of it started with the “scratch” technology used for Apollo 11. Just imagine what they’ll be coming up with for a human mission to Mars, which NASA is planning by 2033.

Don’t know about you, but I never get sick of car talk.

I’m a car guy. I’m also a tech guy. So when I see a story that combines car and tech, I’m always interested – even if, these days, such stories do seem to come fast and furious.

The most recent article of interest appeared last week on EE Times. It reported on two pitches writer Nitin Dahad had recently heard from a couple of early-stage car-tech companies. One of these organizations, the UK’s Academy of Robotics, is focused on last-mile autonomous delivery. It’s developing an AI-based vehicle called Kar-go. Among the onboard technologies are “a machine-vision system designed by members of the team who worked on the Mars rover” and “a package-sorting mechanism” that optimizes drop-offs. Kar-go sports an Nvidia Drive supercomputer. (Remember when supercomputers were so crazily expensive that there were only a few of them…) There’s also a Tesla battery, plus plenty of other tech goodies:

Its modular driverless delivery technology combines continuous-time recurrent neural networks (CTRNNs) with CNNs (convolutional neural networks) and long short-term memory (LSTM) to create a top-level controller system which can ‘learn’ from the past, ‘perceive’ its environment and make any necessary corrections. This complex hierarchical system can run on a single NVIDIA GPU-equipped desktop.
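
To make that stack a little more concrete, here’s a rough PyTorch sketch of one piece of it – a CNN feeding an LSTM that emits control outputs. All the layer sizes are invented and the CTRNN component is omitted; this is a sketch of the general pattern, not Kar-go’s controller:

```python
import torch
import torch.nn as nn

# Simplified sketch of a camera-to-control pipeline: a CNN extracts
# features from each frame, an LSTM accumulates them over time, and a
# linear head emits steering/throttle corrections. Layer sizes are
# invented; the real Kar-go stack (CTRNNs + CNNs + LSTM) is far richer.

class DrivingController(nn.Module):
    def __init__(self, hidden=128, n_controls=2):
        super().__init__()
        self.cnn = nn.Sequential(                          # per-frame perception
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),         # -> 32*4*4 = 512
        )
        self.lstm = nn.LSTM(512, hidden, batch_first=True)  # temporal memory
        self.head = nn.Linear(hidden, n_controls)           # steering, throttle

    def forward(self, frames):                              # (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                        # latest-frame control

controller = DrivingController()
video = torch.randn(1, 8, 3, 96, 96)                        # one 8-frame clip
print(controller(video).shape)                              # torch.Size([1, 2])
```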

The beauty of the Kar-go is that each one will be trained to “become an expert in its particular route, so it can then focus on any abnormalities in that route,” and it will be assigned to work in its own area.

While this is all very interesting, I was more intrigued by the modular universal EV platform that’s coming out of REE, an Israeli startup.

The company integrates all of the components formerly found under the hood into the wheel. It says this design offers optimal freedom of design, and the potential for multiple body configurations on a single platform.

Putting the motor, steering, suspension, drivetrain, sensing, brakes, thermal systems and electronics in the wheel leaves a ‘flat’ platform. This provides a low center of gravity to maximize efficiency and supports the vehicle’s agility and stability. The design also drastically reduces its footprint, weight, and improves both energy-efficiency and performance — aspects crucial for electric and autonomous vehicle development…

Based on a novel quad-motor system, and including active height-levelling suspension, steer-by-wire and a smart quad-gear box, the technology provides the basis of any type of vehicle from a high performance car able to do 0-60 mph in less than 3 seconds to an off-road SUV with advanced active suspension technology. The platform can also be used as the base of a robotaxi or even a 10-ton cross country truck.

Sure, with the advent of electronics, it’s been a while since a tinkerer could pop the hood to take care of so many of their car problems. Old-time VW Beetles had the engine in the rear, but that’s the only radically different car “innards” design I can think of, and it’s not all that radical.

There have been other companies putting motors in the wheels of EVs, but this is the first one I’ve seen that’s putting steering, suspension, drivetrain, brakes, etc. in there as well.

This looks like a game-changer to me.

 

Thanks for the memories, implants

For a number of years, researchers have been working on brain implants that will help with memory loss. Now it’s been reported that devices that can restore memory-generation capability for those who’ve suffered traumatic brain injuries are closer to becoming a reality.

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.

In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain. (Source: Bloomberg)

The device used in the Mayo experiment, which was developed by Penn professor Michael Kahana in conjunction with Medtronic, deploys sensors in the brain “to measure electrical signals.”

If brain activity is suboptimal, the device provides a small zap, undetectable to the patient, to strengthen the signal and increase the chance of memory formation. In two separate studies, researchers found the prototype consistently boosted memory 15% to 18%.
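
The closed-loop idea is simple to sketch, even if the neuroscience isn’t. Here’s a toy Python version – the signal, threshold, and boost are all invented for illustration; the real systems decode actual hippocampal activity:

```python
import random

# Toy sketch of closed-loop stimulation: watch a rolling measure of
# "encoding quality" and deliver a small stimulus when it dips below
# threshold. Signal, threshold, and boost are invented for illustration.

THRESHOLD = 0.5
BOOST = 0.3

def monitor(samples):
    stimulations = 0
    for t, quality in enumerate(samples):
        if quality < THRESHOLD:      # suboptimal activity detected
            quality += BOOST         # deliver an imperceptible "zap"
            stimulations += 1
        print(f"t={t}: quality={quality:.2f}")
    return stimulations

random.seed(0)
signal = [random.random() for _ in range(10)]
print("stimulated", monitor(signal), "times")
```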

Another group, a collaboration between Wake Forest Baptist Medical Center and the University of Southern California, has used a different approach to memory retention. Their results were even more impressive than those achieved at Mayo, showing improvements of over one-third.

To form memories, several neurons fire in a highly specific way, transmitting a kind of code. “The code is different for unique memories, and unique individuals,” [lead author Robert] Hampson says. By surveying a few dozen neurons in the hippocampus, the brain area responsible for memory formation, his team learned to identify patterns indicating correct and incorrect memory formation for each patient and to supply accurate codes when the brain faltered.

Neither of these devices is ready for prime time quite yet. So far, the experiments have been conducted on people who already have electrodes implanted in their brains. (These are epilepsy patients, and the electrodes have been used to monitor seizures.) The implants rely on “clunky external hardware that won’t fit in somebody’s skull.” But there’s work underway to make devices that are actually small enough to embed in someone’s brain.

The U.S. Defense Advanced Research Projects Agency (DARPA) funded both of these projects, and veterans will be the first people the emerging technology is used on. With all the traumatic brain injuries suffered by our military personnel over the last couple of decades, there’s no one more worthy of benefitting from it. Up next will be Alzheimer’s sufferers and stroke patients.

I know I’m something of a broken record here, but nothing makes me prouder of our profession than the work we do to improve and save lives.



Ro, ro, ro your boat

I’ve had an interest in robotics dating back to my grad school days. So naturally I was intrigued by a recent article I saw on Futurism about how MIT scientists, funded by the city of Amsterdam, are building “self-piloting” mini-barges — “Roboats” — “to cruise its canals and make better use of waterways that have fallen out of favor for roads.”

Amsterdam has been using the latest and greatest technology since it was founded in the 13th century. Back then, the technology was the dam building that gave the city its name and enabled it to come into being. A few centuries later, technology was put to use developing its canal system. More recently, Amsterdam has become a leader in flood control.

While canals have long been an important part of Amsterdam, with the advent of the automobile they were no longer required for the commercial uses – transport of goods – they were built for. During the 20th century, a number of the city’s canals were filled in to create roadways and parking spaces, and the canals that remained came to be used largely for pleasure boats and tourists.

These days, the world is no longer so enamored of roadways and parking spaces, and Amsterdam is looking for a way to better utilize its canal system. And that better way is the Roboat.

Now, grad student Luis Mateos has built a new algorithm that directs the roboats to automatically latch together, according to an MIT press release. The boats can currently combine into autonomous barges, but Mateos sees a future in which autonomous boats could create temporary bridges or pop-up spaces that make waterways more accessible without requiring new city infrastructure…

For now, scientists are working with 15-square-foot prototypes, one quarter the size of an envisioned final product. These scaled-down roboats could join together to carry garbage and other cargo down the canals, while larger models could do more.

I checked out the Roboat site, and they had some information on the underpinnings behind their use of “self-driving technology to change our cities and their waterways.”

Roboats, of course, require vision technology so that the barges can avoid collisions and successfully dock together. For this, Roboat uses “a LiDAR time-of-flight sensor and a camera to view its surrounding environment. Perception methods such as clustering and neural network classifiers are used on sensor readings to recognize objects in the canal environment.”
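
The clustering step, at least, is easy to sketch. Here’s a minimal Python example using DBSCAN on made-up 2D scan points – the real pipeline fuses camera data and neural classifiers on top of this:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch of obstacle detection from a 2D LiDAR scan: group nearby
# returns into clusters, each cluster a candidate object in the canal.
# Points and parameters are invented for illustration.

scan = np.array([
    [1.0, 2.0], [1.1, 2.1], [0.9, 1.9],   # cluster A: e.g., a moored boat
    [5.0, 5.0], [5.2, 4.9],               # cluster B: e.g., a buoy
    [9.0, 0.5],                           # stray return / noise
])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(scan)
for label in set(labels):
    if label == -1:
        continue                           # -1 marks noise points
    centroid = scan[labels == label].mean(axis=0)
    print(f"object {label}: centroid at {centroid}")
```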

Then there’s motion planning with obstacle avoidance.

The motion planner always considers the boat dynamics and geometry, canal boundaries, and obstacle dynamics simultaneously to calculate the optimal obstacle-free path for the Roboat in real time. The motion planner will become even more intelligent in the near future by actively learning its strategies and the surrounding dynamics.

For predictive trajectory tracking, the MIT group has built a nonlinear model predictive controller that “iteratively optimizes the control action by collectively considering the reference trajectories, the nonlinear dynamic boat model, and the thruster force constraints in a finite time horizon.” A motion planner calculates “the optimal obstacle-free path for the Roboat in real time.” The latching system will let the Roboats join together as needed to create whatever infrastructure – a bridge, a concert stage – is needed to satisfy a specific use case. For all this infrastructure to be created – by self-assembly, no less – Roboat coordination will rely on a sophisticated “platform that includes communication, sensing, and control of multiple units in the Roboat network.”
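
Model predictive control is easier to grasp with a toy example. The sketch below plans thrusts over a short horizon for a simple kinematic boat – the dynamics, costs, and bounds are all invented, a far cry from MIT’s full nonlinear controller with thruster force constraints:

```python
import numpy as np
from scipy.optimize import minimize

# Crude MPC sketch: pick controls over a short horizon that keep a toy
# boat near a reference path. Apply the first control, then re-plan.

DT, HORIZON = 0.5, 5

def rollout(state, controls):
    """Kinematic model: state = (x, y, heading); controls = (speed, turn rate)."""
    path = []
    x, y, th = state
    for v, w in controls.reshape(HORIZON, 2):
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        path.append((x, y))
    return np.array(path)

def cost(controls, state, reference):
    path = rollout(state, controls)
    tracking = np.sum((path - reference) ** 2)    # stay near the reference
    effort = 0.1 * np.sum(controls ** 2)          # penalize control effort
    return tracking + effort

start = np.array([0.0, 0.0, 0.0])
reference = np.column_stack([np.linspace(0.5, 2.5, HORIZON), np.zeros(HORIZON)])
result = minimize(cost, x0=np.zeros(2 * HORIZON), args=(start, reference),
                  bounds=[(-1.0, 1.0)] * (2 * HORIZON))
print("first planned control:", result.x[:2])     # apply, re-plan, repeat
```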

Finally, the Roboat system will be capable of environmental monitoring: water and air quality, and weather conditions.

Makes me want to go over to Amsterdam and check out the Roboat system, once it’s up and running.

 

You Gotta Know When to Hold ‘Em

I’m always on the lookout for interesting and novel uses of technology – especially when it involves technology that Critical Link works with. So naturally, I was drawn to an article I saw a few weeks back on Hackaday that talked about some Cornell students who built a pokerbot on an FPGA.

The bot uses the principle of Monte Carlo simulation to calculate the probabilities of an individual winning a hand of Limit Texas Hold’em. Calculating the entire set of possible hands is impractical, so in a Monte Carlo simulation a sample is calculated instead. By accelerating these calculations on an FPGA, the pokerbot is able to calculate 300,000 possible hands in just 150 ms, and present a probability of winning to the human player. This same calculation method is then used to make decisions for the computer players in the game, too.
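
The Monte Carlo idea is easy to demonstrate in software, even without a real poker evaluator. The sketch below estimates win probability by dealing random completions of the hand, using a deliberately crude rank-sum “evaluator” to keep it short – the students’ FPGA design, of course, evaluates actual poker hands, massively in parallel:

```python
import random

# Monte Carlo win-probability sketch for a heads-up hold'em-style game.
# To keep the example short, "hand strength" is just the sum of the best
# five card ranks -- a crude stand-in for a real 7-card poker evaluator.

RANKS = list(range(2, 15)) * 4             # 52 cards; suits ignored here

def strength(hole, board):
    return sum(sorted(hole + board, reverse=True)[:5])

def win_probability(hole, trials=20_000):
    deck = RANKS.copy()
    for card in hole:
        deck.remove(card)                   # our hole cards leave the deck
    wins = 0
    for _ in range(trials):
        drawn = random.sample(deck, 7)      # opponent's 2 hole cards + 5 board
        opp, board = drawn[:2], drawn[2:]
        if strength(hole, board) > strength(opp, board):
            wins += 1
    return wins / trials

random.seed(42)
print(f"P(win) holding two aces: ~{win_probability([14, 14]):.2f}")
```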

The students – Drew Dunne, Jacob Glueck, and Michael Solomentsev – produced an excellent paper on their project. Here are a few of the high points.

  • Two of the students involved are poker players; the third isn’t. In fact, they claim that, even after they completed their work, he still doesn’t know how to play. (For whatever reason, I found this pretty funny. Then again, as an engineer, I’ve often worked on systems where I didn’t have a ton of domain knowledge of the application!)
  • Because there are so many possible game states in poker, it lends itself to Monte Carlo simulations. (Guess gambling’s where the name Monte Carlo simulation comes from…) Rather than evaluate all the possible hands, which the students decided would be overkill, they chose to evaluate “a randomly distributed subset” of all the possible states, to “produce an indication of how strong a player’s hand is.”
  • The hardware deployed “is made up of multiple modules stitched together on the FPGA.”

There’s lots of great detail in the paper the students wrote. But the best news was the results:

  • The FPGA-based poker bot had it all over the alternatives when it comes to hardware acceleration. “It runs about 4000 times faster than a (unoptimized) C++ version on the ARM Hard Processor System (HPS), and about 10 times faster than the C++ version on an Intel i7-6700HQ laptop processor.”

Of course, having a poker bot by your side to clue you in on the probability of your holding a winning hand does kind of sound like cheating. Sort of like card counting. But I’m not much of a gambler to begin with, so I’ll just end with the immortal words of Kenny Rogers: “you gotta know when to hold ‘em, know when to fold ‘em, know when to walk away, know when to run.” Sounds like the FPGA poker bot would help you do just that.

Drone to the Rescue! Literally.

We keep hearing that, any day now, drones will be making our Amazon deliveries, dropping off that pair of sneakers or those earbuds we couldn’t live another day without. In truth, most of our consumer purchases are “nice to haves”. If a package gets caught in traffic, no big deal. But there are circumstances where time is so critical that bottlenecks and tie-ups must be avoided. One of those is the medical arena, where getting a device, or drugs, or even an organ ASAP can be a matter of life and death – and where drones can come in very handy.

Last month, the University of Maryland Medical Center took part in a demonstration of the viability of using a drone to deliver a kidney for transplant.

Researchers at the University of Maryland Medical Center, where the operation took place, said the demonstration shows the potential of unmanned aircraft systems for providing organ deliveries that, in many cases, can be “faster, safer, and more widely available than traditional transport methods.” (Source: Digital Trends)

This wasn’t an actual emergency situation, but the transplant operation was a real one. And the concept was proven.

Using a drone to deliver a pair of sneakers or earbuds is no big deal. The goods aren’t all that fragile or complex. Not so with human organs.

Last fall, well in advance of the demonstration flight:

… investigators from the University of Maryland put a kidney in a cooler and flew it on test flights underneath a DJI M600 Pro drone. To find out exactly what happened during the course of the journey, they developed a dedicated organ-monitoring wireless biosensor to measure temperature, barometric pressure, altitude, vibration, and GPS position. (Source: Digital Trends)
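
At its core, that kind of monitoring reduces to sampling a few channels and flagging excursions. A toy Python sketch, with invented limits and readings (the real biosensor and its thresholds are the researchers’ own):

```python
from dataclasses import dataclass

# Toy sketch of in-flight organ telemetry: sample a few channels and flag
# any excursion outside safe limits. Channels loosely mirror the article
# (GPS omitted); the limits and readings are invented.

@dataclass
class Reading:
    temperature_c: float
    pressure_hpa: float
    altitude_m: float
    vibration_g: float

LIMITS = {
    "temperature_c": (2.0, 8.0),      # assumed cold-chain range
    "pressure_hpa": (950.0, 1050.0),
    "altitude_m": (0.0, 120.0),
    "vibration_g": (0.0, 0.5),
}

def check(reading: Reading) -> list[str]:
    alerts = []
    for channel, (low, high) in LIMITS.items():
        value = getattr(reading, channel)
        if not low <= value <= high:
            alerts.append(f"{channel}={value} outside [{low}, {high}]")
    return alerts

print(check(Reading(4.5, 1010.0, 90.0, 0.2)))   # [] -- all nominal
print(check(Reading(9.1, 1010.0, 90.0, 0.7)))   # temperature + vibration alerts
```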

The kidney passed with flying colors, remaining stable during the test flight “and actually experienced fewer vibrations than when being transported in a fixed wing plane. Analysis after the flight revealed no damage had taken place either.”

Fast forward to the demonstration with a kidney headed for someone’s body. In this case, it was that of a 44-year-old woman from Baltimore. The surgery was successful, and the patient went home with her new, drone-delivered kidney up and running.

This was a short hop: the kidney only had to be transported three miles. That means the approach will work in urban situations. In the longer run, the hope is to use drones for longer hauls. This will reduce or eliminate reliance on commercial flight schedules to get an organ from Point A to Point B, or on expensive private transportation. Not to mention eliminating the requirement that medical personnel accompany the organ. So drone transport will not only make organs more readily available, it may cut costs as well.

I’ve said it before, and I’ll say it again: one of the things that makes me most proud is being part of an industry that has such great potential for saving lives.

How technology saved – and will restore – Notre Dame Cathedral

Like most of the world, I was saddened by the devastating fire that did so much damage to Notre Dame Cathedral in Paris last month. You don’t have to have been to this magnificent building, nor do you have to be religious, to be heartened that the damage was contained and the building will be fully restored.

Once it was out of the news, I didn’t give much of a thought to Notre Dame. That is, until I saw an article in EE Times in which they turned to Jon Peddie, a 3D expert, and asked him:

If you were contacted by Anne Hidalgo, mayor of Paris, and asked for a list of technologies needed to restore Notre Dame, what would be your advice to her?

Peddie’s initial answer was to ask a question of his own: how closely would the restoration have to stick to the original? E.g., could they replace oak beams with fireproof titanium? Then he started his tick list of the approach he’d take and what technology he’d deploy:

  • An eight-corner laser scan (inside and out) that would be used to generate CAD drawings to compare to whatever original design drawings are available.
  • “A multi-frequency ultrasonic scan to find any weak, cracked, crumbling, or missing mortar. Those interfaces will be the weak links in reconstruction and need attention before any new construction starts.”
  • Multiple high-res photos of the stained-glass windows, using HDR stereo-depth cameras attached to drones. Glaziers would use these to restore (clean, restructure, and repair) the glass.
  • Ray-tracing techniques so that “the lighting conditions of the building before the fire can be recreated.”
  • Digitize all data that’s available on the building, from original diagrams to files pertaining to any of the earlier restoration efforts.
  • Put out the word via social media to have people send in their photos of the cathedral’s interior. Then, use AI software to “catalog and create 3D models of everything” so that everything – down to the doorknobs and hinges – can be replicated.
  • Tap the 3D models that Ubisoft used for its Assassin’s Creed game.

That got me looking around for other articles on the technology. In the process, I came across a piece in The Atlantic about Andrew Tallon, an architectural historian (who died last November at the age of 49) who, in 2010, alongside colleague Paul Blaer, did an ultra-detailed scan of Notre Dame:

 They mounted the Leica on a tripod, put up markers throughout the space, and set the machine to work. Over five days, they positioned the scanner again and again—50 times in all—to create an unmatched record of the reality of one of the world’s most awe-inspiring buildings, represented as a series of points in space.

What a treasure trove this will be for those doing the restoration.

Technology also came into play while the fire was being fought, saving the cathedral from total destruction.

As a wall of orange flames roared across the cathedral’s roof Monday, and hundreds of firefighters mounted their counterattack, high-tech machines had been brought to the fight.

Hovering in the air above the cathedral, a pair of Chinese-manufactured commercial drones equipped with HD cameras — the Mavic Pro and Matrice M210, made by DJI Technology — helped firefighters position their hoses to contain the blaze before it destroyed the cathedral’s two, iconic belfries, according to the French newspaper Le Parisien.

 “It is thanks to these drones, to this new technique absolutely unavoidable today, that we could make tactical choices to stop this fire at a time when it was potentially occupying the two belfries,” Paris firefighters spokesman Gabriel Plus said.

On the ground, Colossus, a robotic fire extinguisher, blasted the nave with water, lowering the temperature of the glass-filled room, the newspaper reported. (Source: Washington Post)

Technology helped save Notre Dame Cathedral, and technology will be used to restore it. But I don’t want to lose sight of the fact that parts of it were built over 800 years ago. Given the technology available at that time, it really is remarkable, isn’t it?

Gene Frantz on the IoT: Smart Sensors Part Two (and More)

This is a continuation of my summaries of Gene Frantz’s series on the IoT, the second post I’m devoting to his discussion of smart sensors.

Gene begins his post Warming Up To Body Heat with an interesting story about the early DSP days at Texas Instruments. (Gene was at TI for many years, and is considered the Father of the DSP.) He was working with a client that designed hearing aids, and the client was under the impression that they were using a three-volt version when, in fact, they were using a five-volt DSP – the only version TI produced at that point. Anyway, one upshot was that TI started making a three-volt DSP.  The other upshot was the movement “to make power dissipation a performance metric of the DSP,” and a goal “to continue down to lower and lower wattages”. Eventually, Frantz ended up asking TI engineers to think about creating “devices that ran on ‘body heat.’” (By the way, on this last one, the industry is getting there!)

Gene’s next post on smart sensors gets into wireless communications. He starts off by mentioning that the first wireless system was smoke signaling (used by Chinese soldiers to communicate between towers built along the Great Wall). His point? How a smart sensor communicates with the IoT aggregator will have a major impact on the power budget, so “the method of communication with the aggregator needs to have the lowest power dissipation possible,” leading to the need for smart sensor architecture that’s “driven by whether it took less energy to compress a bit of data than the energy to transmit a bit of information.”
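
That tradeoff can be put in back-of-the-envelope form: compression pays only when the energy it costs is less than the transmit energy it saves. A quick sketch, with made-up numbers:

```python
# Back-of-the-envelope version of the compress-vs-transmit rule: compress
# only if the energy spent compressing is less than the transmit energy
# saved by sending fewer bits. All figures here are invented.

E_TX_PER_BIT = 50e-9        # J/bit to radio one bit to the aggregator
E_COMPRESS_PER_BIT = 5e-9   # J/bit of processing to compress
RATIO = 4                   # compressor shrinks data 4:1

def energy(bits, compress):
    if compress:
        return bits * E_COMPRESS_PER_BIT + (bits / RATIO) * E_TX_PER_BIT
    return bits * E_TX_PER_BIT

bits = 1_000_000
raw, packed = energy(bits, False), energy(bits, True)
print(f"raw: {raw*1e3:.2f} mJ, compressed: {packed*1e3:.2f} mJ")
print("compress" if packed < raw else "send raw")
```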

Gene then gathers his thoughts on smart sensor communication, which include the opinion that the standard methods currently in use aren’t the answer to IoT requirements, and that the comms methods for any specific design “will be non-standard and perhaps proprietary.” He believes that “the methods may take on a broader sense of wireless than we would first think.” I’m looking forward to hearing about new wireless communications approaches.

The last post in Gene’s IoT series took him into “a peek into the future as I see it.” To some extent, the future belongs to aggregators, which he believes “will become smarter, smaller, and lower power while enveloping much of the functions of the clouds and smart sensors.” (Recall that the three IoT components he writes about are aggregators, the cloud, and smart sensors.)

The bigger hurdle of the IoT future will be in how we process all of this data we are collecting to get interesting information and take the appropriate actions. Further, we will need to do it efficiently, in real-time, while not throwing more energy consumption and performance at it.

For our roadmap to the future to make the necessary big leaps, we will need to make some fundamental changes. 

One of those “big leaps” may be making the “move toward being analog rather than digital.” This from the Father of the DSP.

Talk about back to the future!

This about wraps up my coverage of Gene Frantz’s Embedded Computing series, other than to say that I really enjoyed reading (and writing about) Gene’s thoughts and recommend that you take a look at his full posts.

Gene Frantz on the IoT: Smart Sensors Part One

Over the past couple of months, I’ve been devoting a lot of attention (and blog real estate) to Gene Frantz’s series, Everything You Ever Wanted to Know About the Internet of Things, which ran last summer on Embedded Computing. I’ve done so for a few reasons. One, the IoT is so vast and ubiquitous, it’s essential that, as engineers, we all have a good understanding of it. And then there’s Gene’s writing style – clear and interesting. He’s a really good explainer. (I hope that readers are drilling down on my summaries and reading his full posts.) Finally, I’m a long-time admirer of Gene Frantz, the Father of Digital Signal Processing.

In my earlier Gene Frantz-related posts, I wrote about his overall view of the IoT (which he breaks down to aggregators, the cloud, and smart sensors), about aggregators, and about the cloud. Now we’re getting into the area that’s nearest and dearest to my heart: smart sensors.

In his first sensor post, Gene begins with an example of the “ultimate smart sensor,” in this case one designed for artificial vision.

Block diagram of a smart sensor

He then breaks down what he calls the P’s & Q’s. The P’s are the three components that make up a sensor: Performance (blue), Power (green), and what he calls Personality (red) – the purpose of the sensor. For each of the P’s, Gene lists the Q’s – the questions that need to be addressed when designing that block. For Performance, the questions are around the process, communications, security, etc. For Power, they’re about method, buffer, management, distribution… For Personality, the questions are specific to the function. In the case of his example image sensor, they would include pixels, bits per pixel, and b&w vs. color.

Gene’s second post on smart sensors is largely a rumination on smart dust, those tiny MEMS devices that get their name because they can be as tiny as grains of dust. Gene comes up with one application for them: smart paint that would let him “change the color of the walls of [his] house using a remote control.” Interesting… (And now I’ll have to think of an app for it…)

In his next post, Gene goes back to the block diagram (shown above).

The ultimate goal of a smart sensor is that it be completely autonomous. That means it sources its own energy, performs all of its functions and communicates with the outside world wirelessly… My mental view of how this plays out is to have three independent subsystems in the smart sensor. One of the subsystems handles all of the power management assets, one handles all of the performance aspects and one subsystem handles all of the personality aspects of the smart sensor. Each of these three independent subsystems can be connected together to create the smart sensor. With this flexibility, various methods of energy management can be developed then mixed and matched with various processor systems. Finally, different personality boards with different arrays of sensors can be attached to the other two subsystems to create different smart sensors.
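
That mix-and-match architecture maps naturally onto composition in software. Here’s a skeletal Python sketch of the idea, with made-up subsystem types:

```python
# Skeletal sketch of the three-subsystem smart sensor: independent power,
# performance (processing), and personality (sensing) modules composed
# into one device. All class and method names are invented for illustration.

class SolarHarvester:
    def draw(self, millijoules):          # power subsystem
        print(f"supplying {millijoules} mJ from the harvester")

class LowPowerMCU:
    def process(self, sample):            # performance subsystem
        return sum(sample) / len(sample)

class ImagePersonality:
    def read(self):                       # personality subsystem: sensor array
        return [0.2, 0.4, 0.6]            # stand-in pixel intensities

class SmartSensor:
    """Mix and match any power/performance/personality boards."""
    def __init__(self, power, performance, personality):
        self.power, self.perf, self.pers = power, performance, personality

    def sample(self):
        self.power.draw(1)
        return self.perf.process(self.pers.read())

sensor = SmartSensor(SolarHarvester(), LowPowerMCU(), ImagePersonality())
print("result:", sensor.sample())         # swap boards to build a new sensor
```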

He then goes into some detail to provide answers to the Q’s associated with the three P’s (Performance, Power, Personality).

Next up, Gene discusses the never-ending issue of performance vs. energy efficiency.

Now, in the IoT system we have a need for ultra-low powered smart sensors along with energy efficient high performance cloud computing devices….The answer is smart sensors will need just enough performance to obtain the input from the sensors, process the signal and send the results to the communications system to be transmitted to the aggregator.

Gene then asks a few provocative questions on performance-power tradeoffs, including whether we need to go back to assembly code, whether security is a luxury, whether we should go clockless, and – my favorite – “is it time to go back to analog computing?”

What do you think?

Gene Frantz on the IoT: Now it’s the cloud’s turn

I recently came across a series of blog posts by Gene Frantz devoted to the IoT. Since coming across these posts, I’ve done a summary of Gene’s definition of the IoT, and a second post on his three-part discussion of aggregators, one of the three IoT components he writes about (the others are the cloud and smart sensors).

For starters, Gene is pretty adamant that, when we’re talking about the cloud, we should really be referring to it in the plural, since there are any number of private and public clouds out there. But despite making his case, he falls in line and uses what is the common term: the cloud. Whatever it’s called, it’s part of the overall IoT ecosystem.

Terminology settled, he provided an overview of the different roles that IoT components take on:

…the components in the smart sensors will be designed for ultra-low power, accepting the resulting performance levels. The components for the aggregator will be higher performance, but within a power budget. Finally, the components in the cloud will be designed for maximum performance with less emphasis on power dissipation or cost. 

For the cloud, “performance is the primary priority, if not the only priority.”

If I take this concept of performance as the primary, if not the only, priority, for cloud computing, the class of devices in the cloud are high performance multi-core processing systems, GPUs, FPGAs, and specifically designed custom devices. In all of these classes of devices, their goal is to maximize performance, maximize communications bandwidth, or guarantee security and privacy.

(Gene acknowledges that in many circumstances, aggregators are also a high-performance component. He further acknowledges that his focus on the microprocessors that are cloud sub-components is because he comes from the world of DSPs. In fact, Gene Frantz is considered the father of the DSP.)

Gene concludes the section of the IoT series devoted to the cloud with a rather philosophical exploration of the use of the word “infinite” in the context of the IoT, grappling with the often-used notion that the cloud offers infinite performance, infinite bandwidth, and total security, and wondering whether “infinite” overstates the case. He lands on his personal definition: practically speaking, Gene accepts that “infinite” translates into “just a bit more” performance and bandwidth than he needs, and “security [that keeps] ahead of those trying to take my stuff.”

My next report out on Gene’s series will be devoted to something near and dear to my heart: smart sensors.