
What Amazes the Amazing Mr. Gates

MIT’s Technology Review recently asked Bill Gates what new technologies he feels will be changing the world for the better. Here’s what he has his eye on.

Robots will be getting more dexterous and smarter: To date, robots have proven pretty adept at completely repetitive tasks. That’s why they’ve done so well in terms of replacing humans on assembly lines. But if any change is introduced, they tend not to fare so well. Now, thanks to AI techniques like reinforcement learning, robot dexterity and flexibility are improving. In 3-5 years, they should be better able to deal with the “messiness of reality.” (They’re not mentioned in the article, but my favorite robot-watching is the robot dog videos from Boston Dynamics, especially the one where the dog slips on a banana peel. Google it!)

New-wave nuclear power: Historically, cost and safety have gotten in the way of acceptance of nuclear power. Among the new designs coming on board are:

…generation IV fission reactors, an evolution of traditional designs; small modular reactors; and fusion reactors, a technology that has seemed eternally just out of reach.

It’s always good news to have breakthroughs in technologies that offer alternatives to fossil fuels as a power source.

The ability to predict premature babies: My kids all went full term, but few things strike more terror into a parent’s heart than the thought of having a preemie. One in 10 babies is born prematurely, and “it’s the leading cause of death for children under age five.” Now, within a few years, there’ll be a simple, non-invasive blood test that can alert a physician if a pregnant mother is apt to deliver prematurely. The doctor can then take steps to ward off an early birth. And that’s good news.

Better screening for gut diseases: Environmental enteric dysfunction (EED) is fairly common in poor countries, and when it comes to malnourishment, is often the culprit. But endoscopic testing is expensive and difficult to conduct under less than optimal conditions. Now a scientist has come up with “swallowable capsules [that] contain miniature microscopes.” These microscopes send images to a console, helping a physician make a diagnosis and determine treatment. In addition to EED, the device can be used to test for a precursor for esophageal cancer.

Custom cancer vaccines: Another breakthrough that Bill Gates likes is custom cancer vaccines that “trigger a person’s immune system to identify a tumor by its unique mutations.” The hope is that these vaccines will work better than shotgun chemo approaches in terms of destroying tumor cells, but they’ll do so without causing as much damage to healthy cells as chemo does.

Plant-based (and lab-grown) burgers: You may remember “hold the pickle, hold the lettuce.” Now it’s becoming “hold the cow-based burger.” Anyway, the good news is that the world is getting richer, and more folks can afford meat. The bad news is that to produce a pound of animal-based protein uses a lot more water, land, and fossil fuel than does the production of a pound of plant protein. So it will be better all round for the environment if we’re consuming more plant-based or lab-grown meat. (A friend recently had one of the plant-based protein burgers and said it was a bit well-done – the only way it was offered in the restaurant – but basically okay.)

Capturing carbon dioxide: A number of companies are working on “practical and affordable ways to capture carbon dioxide from the air [that] can soak up excess greenhouse-gas emissions.” A capture approach that was recently discovered should be able to bring the cost per ton of capture down by an order of magnitude. We still need to figure out what to do with it once it’s captured. Folks are working on that, too.

ECGs from a wearable device: This capability is already available on the Apple Watch and other devices, and delivers nearly as accurate a result as the one you’d get from a medical device that requires a trip to a clinic.

Toilets that don’t require sewers: One thing that those of us who live in more modern, wealthier countries don’t have to worry about is sanitation. But that’s not true for much of the world. Providing better sanitation has been one of Bill Gates’ aims for nearly a decade. In 2011, he set up the “Reinvent the Toilet Challenge.” A number of companies responded to the challenge of perfecting energy-efficient toilets that don’t need a sewer system, and which can perform on-the-spot waste treatment. Now the challenge is to produce them at scale, a challenge that should be met in the next couple of years.

AI assistants get more conversational – and more capable: Just as robots are getting more adept, so too are AI assistants like Alexa and Siri. And pretty soon, they’ll be good for something more than telling us who won the Best Actress Oscar and playing the song we requested. Again, it’s AI that’s improving, in this case due to natural language processing. Some of the improvements are already available. Google Duplex:

… can pick up your calls to screen for spammers and telemarketers. It can also make calls for you to schedule restaurant reservations or salon appointments…In China, consumers are getting used to Alibaba’s AliMe, which coordinates package deliveries over the phone and haggles about the price of goods over chat.

While AI assistants can do more than they used to (haggling! All right!), “they still can’t understand a sentence. Lines are scripted or generated statistically, reflecting how hard it is to imbue machines with true language understanding.”

But it’s going to happen…

Anyway, thanks to the amazing Mr. Gates for giving us a view into the technology he thinks is most amazingly worthwhile.

Questions about the IoT? Here’s Gene Frantz on Aggregators

So much of today’s technology interest and invention is devoted to the Internet of Things. I recently discovered a good series of posts on the subject by Gene Frantz that ran on Embedded Computing last year, and a few weeks back, I devoted a post to summarizing Gene’s definition of the IoT. At the highest level, Gene sees the IoT as a “system of systems” composed of clouds, aggregators, and smart sensors. He finds aggregators the most interesting IoT system, and here’s my take on his take:

At first it may seem that, if you have the cloud and smart sensors, you don’t need aggregators occupying the middle. But Gene argues that a system occupying the middle ground is important. Here’s how he views the role of the aggregator:

  • Communicates with the cloud using standard communication methods.
  • Communicates with smart sensors using proprietary communications where the need for long battery life and low cost exceeds the need for standards.
  • Has enough processing performance to service multiple smart sensors. In doing so, it manages the raw data from the multitude of smart sensors, digests their data, prepares a set of information ready to transmit to the cloud, and then transmits that data to the cloud.
  • In some cases, has enough autonomy to act as the cloud for the system it is in.
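The “digests their data” step above is the heart of the aggregator’s job. Here’s a minimal sketch of the idea in Python; the function and field names are my own illustration, not anything from Gene’s posts. The point is simply that many raw samples from many smart sensors get reduced to one compact record per sensor before anything is sent to the cloud.

```python
# Illustrative sketch of an aggregator's data-digestion role: reduce raw
# per-sensor samples to a compact summary suitable for transmission to
# the cloud. All names here are hypothetical.

from statistics import mean

def digest(raw_readings):
    """raw_readings: dict mapping sensor_id -> list of numeric samples.
    Returns dict mapping sensor_id -> {count, min, max, mean}."""
    summary = {}
    for sensor_id, samples in raw_readings.items():
        if not samples:
            continue  # nothing to report for an idle sensor
        summary[sensor_id] = {
            "count": len(samples),
            "min": min(samples),
            "max": max(samples),
            "mean": round(mean(samples), 2),
        }
    return summary

# Many raw samples in, one small record per sensor out.
readings = {
    "temp-01": [21.0, 21.4, 22.1, 21.9],
    "temp-02": [19.5, 19.7],
    "temp-03": [],
}
print(digest(readings))
```

In a real aggregator the summarization would be application-specific, but the shape of the job is the same: raw data in from the sensor side, digested information out to the cloud side.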

In keeping with its role in the middle, when compared to the cloud and to smart sensors, aggregators have medium power dissipation, medium cost, medium performance, and medium size. Another aspect of an aggregator: multiple communications options.

In his next post on aggregators, Gene asks what an aggregator looks like. So will we.

An aggregator is made up of:

  • A computer system
  • A power management system
  • A memory system
  • Communication systems

Just as an aggregator provides multiple communications options, it has multiple design possibilities: PCB, SOM, SiP, or SoC, with the choice “dependent on size and flexibility constraints.” Here’s the block diagram he provides for a typical aggregator:

Gene then describes what I would characterize as something of a tear-down of a PCB-based aggregator. Not surprisingly, given Gene’s long career with TI, the aggregator he uses to illustrate his point uses TI processors (ARM A8 – a technology that we have deployed at Critical Link in our SOMs).

In his third post on aggregators, Gene explores how aggregators communicate. But first, he talks about shrinking systems, and “lowering power dissipation in digital signal processor devices.” (As an aside, Gene Frantz is considered the father of the DSP!)

… the aggregator will need to have an ultra-low power communication link with the many smart sensors (perhaps thousands) it is assigned to communicate and aggregate the data it receives into information. It is the information it will send on to the cloud for the final actions. The communications with the cloud will be done with an industry standard method. The communications with the smart sensors will most likely be proprietary rather than a standard in order to guarantee minimal power dissipation at its needed communication rate.

Next up in our series on Gene Frantz’s IoT series: the cloud.

Everything You Ever Wanted to Know about the IoT? Just ask Gene Frantz.

Critical Link pretty much cut its product development teeth on DSPs. Although we have branched out since then, we still do plenty of work with DSPs. And we still remain grateful to (and admiring of) Gene Frantz who, while at TI, pretty much invented Digital Signal Processing. Gene retired a while back, but he’s not exactly kicking back in his retirement. Among other things, he’s worked on an excellent set of posts on the Internet of Things, which ran last spring/summer over on Embedded Computing. Don’t know how I missed these the first time around, but I’ll be making up for lost time by summarizing some of Gene’s posts, starting with his definition of the IoT:

It is a system that consists of three different groups of sub-system components:

  • Clouds
  • Aggregators
  • Smart sensors

Each of the above sub-system components is necessary for the overall IoT system to function optimally. The cloud is the ultimate computing unit and universal communications network. The smart sensors are the interface to the real world. Finally, the aggregators are the go-betweens. To the cloud the aggregator looks like smart sensors and to the smart sensors it looks like the cloud.

In his next piece, Gene drills down further on this definition, starting with his characterization of the IoT as “a system of systems”, and his view that “any of the two IoT components functioning together without the third can be a complete IoT system.” He gives a couple of examples of how the components in an IoT “system of systems” work together, then points out that each of the different IoT components – smart sensors, aggregators, the cloud – has distinctive considerations when it comes to performance, cost, power dissipation, and size:

  • The cloud system focuses on performance. Therefore, cost, power dissipation and size are at best secondary concerns.
  • The aggregator system is less concerned with performance and power dissipation and more with flexibility.
  • The smart sensor system focuses on battery life, size and cost, making performance a distant third in priority.

By the way, the sample applications Gene uses are interesting ones. I don’t know if the coffee shop with a smart “infinite cup of coffee” exists in real life or just in Gene Frantz’s head, but I’m guessing that at some point in the future there’ll be one on every corner.

In his third definition piece, Gene asks the question about the number of systems that make up the IoT. If you’re looking at the overall IoT, there are an almost Carl Sagan-esque number of devices that are part of it: an estimated 8+ billion in 2017, thus a lot more today. For the most part, in this post, Gene addresses the complexity of arriving at a precise count (or even an accurate estimate) for the Internet of Things. One thing that contributes to this complexity is that sometimes an “aggregator looks more like a smart sensor than an aggregator.” Or the “aggregator may look like the cloud.” Then there’s the fact that “a network of smart sensors [may] act both as a smart sensor and at the same time as an aggregator.”

Let’s just stick with big and complex.

I’ll be drawing on other posts in this series, but I do encourage you to read Gene Frantz’s work here, which is clear, straightforward and interestingly presented. Meanwhile, thanks again for DSP. Critical Link wouldn’t be where we are today without it.

What I didn’t see at CES 2019 (but, fortunately, other folks did see)

The tech year always kicks off with the massive Consumer Electronics Show (CES), held each January in Las Vegas. We do a number of shows – Photonics West, coming up next week! – but CES isn’t one of them. After all, Critical Link tends not to play in the consumer space. That said, we all have a lot of interest in what’s going on in that space. For one thing, we’re techies so we’re naturally drawn to innovative cool new applications and gadgets. For another thing, there’s a lot of overlap and back-and-forth among the technologies used in the consumer world and technologies used in the industrial, scientific, defense, etc. worlds we work in.

Since I wasn’t at the show to see for myself, I have to rely on what the observers on the ground found compelling.

Ars Technica put together a Best in Show list, focusing on the things they view as most interesting. Thus, their list ignores the near-ubiquitous offerings that had anything to do with Google Assistant and/or Amazon Alexa, which they see as still a ways away from widespread consumer adoption. Instead, they focused on products “that will make someone’s life a little better (or just a little cooler) right here in 2019.” They admittedly missed some innovative products this way, but they did make some interesting here-and-now picks.

Surprisingly (to me, anyway), one of those picks is the Dell XPS 13, which they characterize as “a nearly perfect laptop.” They have long favored the XPS 13, but now that Dell has fixed their webcam problem, moving it from the bottom of the screen (?) to the top of the screen where it belongs, they really like it. They also liked Dell’s Alienware Area 51M, “a gaming desktop in a laptop form factor.” Albeit a laptop form factor that’s on the large side.

Despite their disavowal, one Google Assistant product made the cut: the Lenovo Smart Clock.

You can use it as an alarm clock, of course—it supports Google’s gentle wakeup routine… But you can also use it to track calendar events, ask questions, control various gadgets in your smart home (a perfect fit for the start of the day), and listen to music.

Making it very useful to those who are after a digital assistant.

Top marks went to LG’s OLED TV lineup, with their emissive displays – a pricey “future-proof TV with all the modern bells and whistles and stellar image quality.” (Watch out, LG. Ars Technica thinks that Samsung’s MicroLED will be on the best-of list in another year or two.) Samsung doesn’t have to wait to get on the Ars Technica CES 2019 list, however. They made it with the Samsung Space monitor, for those looking for a larger (32- or 27-inch) monitor.

The HTC Vive Pro Eye is “making VR more natural”.

The Pro Eye features built-in eye tracking…Modern video games use a technique called “frustum culling” to render only what is in front of the game’s virtual camera—not rendering stuff outside your field of view allows the graphics hardware to focus entirely on making just what you can see as realistic or attractive as possible, which allows more detailed virtual worlds on lower-end hardware.

Eye-tracking enables an even more sophisticated technique than that called “foveated rendering,” which reduces image rendering quality in your peripheral vision. While you do see and are aware of objects in your peripheral vision, the quality of your perception in those regions is lowered anyway. By employing this technique, game and VR experience developers can further improve the rendering of what’s right in front of you without much downside.
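The gist of foveated rendering is just a quality-versus-gaze-angle tradeoff: full detail where the eye is pointed, progressively less in the periphery. Here’s a toy Python sketch of that idea; the quality tiers and angle cutoffs are invented for illustration – real engines use much finer-grained falloff driven by the headset’s eye tracker.

```python
# Toy sketch of foveated rendering: pick a shading rate (fraction of
# full rendering quality) based on angular distance from the tracked
# gaze point. Tiers and thresholds below are hypothetical.

def shading_rate(angle_from_gaze_deg):
    """Return the fraction of full quality to spend at a given angular
    distance from where the eye is looking."""
    if angle_from_gaze_deg <= 5:     # foveal region: full detail
        return 1.0
    elif angle_from_gaze_deg <= 20:  # near periphery: reduced detail
        return 0.5
    else:                            # far periphery: coarse shading
        return 0.25

# Center of gaze gets full quality; the edges of view get a quarter.
print(shading_rate(2), shading_rate(12), shading_rate(40))
```

Because the savings in the periphery are largely imperceptible, the freed-up GPU budget can go toward richer detail in the region you’re actually looking at.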

So that’s what Ars Technica liked at CES 2019. For another look, here’s a link to CNET’s: Everything We Learned about the Future of Tech. In their wrap-up, they discuss 5G (all talk, no products); smart home (Google and Amazon); 8K TVs (including a 98-incher coming from Sony); health tech; robots; beauty tech (huh?); and car tech (I’m in!). They also see the return of the chip wars (Nvidia vs. AMD vs. Intel…).

Anyway, if you couldn’t be at CES, it’s nice to be able to make a vicarious visit.

Gazing into the crystal ball for 2019

It’s always good to start off the new year by looking into the crystal ball and trying to figure out what’s going to happen moving forward. Here, I’m looking into the crystal ball via a report that the folks at ARM have recently published. Their 2019 Technology Survey & Predictions: Yes to More Tech, Yes to AI, Yes to Body Odor Detection is interesting, even if I for one believe there’s a reasonably viable, long-proven approach to body odor detection that does not require technology.

Here are a few quick takeaways.

For starters, their survey showed that, globally, 66% of participants reported that technology had become more important to their lives during 2018, while 31% said the importance had remained unchanged during the year. This isn’t surprising. What is surprising to me is that 3% of people noted that technology had become less a part of their lives. Hmmmm.

The survey showed that the vast majority of respondents had smartphones, and ARM made some smartphone-related prognostications. One thing they’re looking for is smarter navigation, predicting that by 2022 augmented reality will be used in mobile nav apps, and that “within three years, we expect Simultaneous Localization and Mapping (SLAM) to be widely-used in precision (down to <1cm accuracy) location-based service indoors, particularly by retailers looking to guide shoppers to specific goods.”

They’re also predicting that gaming will increasingly be moving to smartphones, with gaming on tablets reaching the point of obsolescence by 2025. To me, this is more about the convergence of tablets and smartphones, as smartphones grow in size. The smartphone gaming (and movie-watching) experience will be enhanced by 5G technology, which will enable no-latency downloads by 2020.

Smartphones will be getting smarter, with a prediction that during 2019, “the average number of global monthly users of AI-based mobile apps…will double to 2 billion people.” And speaking of getting smarter, ARM foresees that 2019 will be the year in which the intelligent home will really start to take off, especially “in areas such as home lighting, irrigation and heating/cooling.” I don’t really need to have a refrigerator figuring out what aisle I’m in at Wegman’s and telling me what to buy, but apps like lighting, heating/cooling, and home security work for me!

While our homes are getting more intelligent, so are commercial buildings.

Energy efficiencies from optimized HVAC and efficient lighting are fast becoming table stakes for all new buildings so smarter owners will look towards space optimization, object detection for safety/security, way finding and asset tracking as a way of making buildings work better for users.

Cities are getting in on the get smart act as well.

The rise of Machine Learning (ML) and Computer Vision (CV) will mean smart city guardians will look beyond cost reduction (e.g., intelligent LED street lighting) to citizen engagement and stronger revenue flows from areas such as red-light violation detection, Wi-Fi-hotspot, 5G services, smart towers, crime detection/analysis and information broadcast.

Regular Critical Link blog readers will know that I am very interested in car-related technology, so I got a kick out of ARM’s saying that “Mph may be giving way to MHz”, as their survey found that car buyers are increasingly interested in the tech features of their new vehicles. I’m not advocating for it, but as cities get smarter, there’ll no doubt be a rise in the use of technology to avoid red-light violations. Just sayin’.

As for the use of technology for body odor detection, ARM sees this as the type of odd-ball AI application we’re likely to see in 2019, when they predict there’ll be “new flexible plastic computer chips embedded in clothes to detect body odor levels.”

Overall, 92% of those surveyed anticipate that the use of AI will become more widespread in 2019, and ARM sees more and more devices running AI algorithms and using machine learning processes to learn user patterns over time. Despite all the technology adoption, concerns remain about security and privacy.

As always with technology, exciting times ahead!

Image source: ARC Advisory Group

Looking for breakthroughs in the fight against neurodegenerative disorders

There was a recent article in the EE Times on some research funding that’s been awarded to a team of Belgian researchers who are developing “a new chip to study the mechanisms of Parkinson’s disease.” The funding comes from the Chan Zuckerberg Initiative, which runs – among other efforts – the Neurodegeneration Challenge Network, a program focused on disorders like Parkinson’s, Huntington’s, Alzheimer’s, and ALS. (Whatever you think of Facebook’s Mark Zuckerberg, he and his wife are very philanthropic.)

There are, of course, other technologically-based research initiatives aimed at solving the riddle of Parkinson’s.

Last spring, the NIH announced a study that had examined the feasibility of deploying self-tuning (adaptive) deep brain stimulation to alleviate Parkinson’s symptoms.

The device differs from traditional ones in that it can both monitor and modulate brain activity. In this work, sensing was done from an electrode implanted over the primary motor cortex, a part of the brain critical for normal movement. Signals from this electrode are then fed into a computer program embedded in the device, which determines whether to stimulate the brain. For this study the researchers taught the program to recognize a pattern of brain activity associated with dyskinesia, or uncontrolled movements that are a side effect of deep brain stimulation in Parkinson’s disease, as a guide to tailor stimulation. Stimulation was reduced when it identified dyskinesia-related brain activity and increased when brain sensing indicated no dyskinesia to minimize deep brain stimulation-related side effects.

This was a small-sample, short-term study, but the results look promising. Once again, embedded technology used for medical purposes.
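The closed loop described in that quote – sense brain activity, decide whether it looks like dyskinesia, adjust stimulation accordingly – can be sketched in a few lines of Python. To be clear, everything here is a made-up illustration of the control loop, not the actual device logic: real systems classify electrode signals, not a boolean flag, and tune stimulation in clinically determined steps.

```python
# Simplified sketch of a closed-loop (adaptive) stimulation controller:
# reduce stimulation when dyskinesia-related activity is detected, and
# restore it when sensing indicates none. All values are hypothetical.

def adjust_stimulation(current_level, dyskinesia_detected,
                       step=0.1, floor=0.0, ceiling=1.0):
    """Nudge the stimulation level down on detection, back up otherwise,
    clamped to a safe [floor, ceiling] range."""
    if dyskinesia_detected:
        return max(floor, current_level - step)
    return min(ceiling, current_level + step)

# Simulate a few sensing cycles: stimulation backs off while dyskinesia
# is detected and recovers when it is not.
level = 0.5
for detected in [True, True, False, True]:
    level = round(adjust_stimulation(level, detected), 2)
print(level)
```

The interesting part, and the hard part in the real device, is the detection step that feeds this loop: recognizing the dyskinesia-related pattern in the motor cortex signal.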

Then there’s Patient-on-a-Chip, “a new personalized medical strategy that can replicate human biological systems in a small chip [which] may help predict patients’ response to certain treatments based on their genetic makeup.” This one comes from Cedars-Sinai Medical Center and Emulate, a bio-tech outfit.

To simulate the complex environment inside the human body, each chip has small channels lined with thousands of living human cells. The chip can receive air and fluids, such as blood, to create a microenvironment that mimics that of the human body.

Using stem cell expertise, researchers can take patients’ blood or skin cells and make any organ cell, such as those of a lung, liver or intestine. Most importantly, these cells retain the unique genetic makeup of the patient…By exposing patients’ cells in Organ-Chips to certain therapies, clinicians can gain more accurate information about how a patient would respond and could then tailor a treatment plan to that individual. (Source: Parkinson’s News Today)

MIT News reports that some of their scientists are testing new drugs with “ALS-on-a-chip” in hopes of finding a cure.

In an advance that could help scientists develop and test new drugs, MIT engineers have designed a microfluidic chip in which they produced the first 3-D human tissue model of the interface between motor neurons and muscle fibers. The researchers used cells from either healthy subjects or ALS patients to generate the neurons in the model, allowing them to test the effectiveness of potential drugs….

The neurons are engineered so that the researchers can control their activity with light, using a technique called optogenetics. The muscle fibers are wrapped around two flexible pillars, so when the neurons are activated by light, the researchers can measure how much the muscle fibers contract by measuring the displacement of the pillars.  (Source: MIT News)

The model is then used to test drugs to see how they restore muscle strength.

Last fall, I lost my father to ALS, so this is personal to me. Finding a cure for this terrible disease, which is so very devastating for those who suffer from it and their families, is one of my greatest hopes for this new year.

I’ve said it before, but I’ll say it again: work like this makes me incredibly proud to be part of the technology industry. I often hear folks criticizing “tech”, or making fun of it because of Facebook or some other application they dislike or consider frivolous. But technology is also responsible for doing so much good for society, and the use of technology for medical breakthroughs is a prime example of this.

A Semi-Autonomous Race Car? Yes, it pretty much exists!

Anyone who knows me well knows that I’m a car guy and an electronics guy. So I’ve developed a keen interest in autonomous vehicles, and have posted on them a number of times over the past few years.

I was especially struck by an article by Brian Santo that appeared a few weeks ago in EE Times. In the article, Brian detailed Arrow Electronics’ efforts to create a semi-autonomous race car that could be driven by a quadriplegic.

Semi-autonomous vehicles are, of course, where things are now as we move further up the road to the world of fully autonomous vehicles. Semi-autonomous vehicles these days do a lot more than the “no-hands” parallel parking technology that was the thing a while back. Today, there are cars on the market that can operate in fully autonomous mode along planned routes, or in low-speed, stop-and-go traffic situations.

But the car that Arrow’s SAM (semi-autonomous motorcar) design team worked on wasn’t interested in low-speed situations. We’re talking race car here, and the story is a fascinating one. (It’s also fascinating to see the car in person. We’ve had the opportunity to do so at a couple of Arrow events – we’re an Arrow partner – and I even got to see it on the track at a pre-race event at Watkins Glen in NY. Quite amazing!)

Sam Schmidt was an Indy-level race car driver who, in 2000, suffered a life-altering crash that turned him into a quadriplegic. Sam’s accident stopped him from driving, but it didn’t stop him from founding Schmidt Peterson Motorsports, an auto-racing team, and from becoming deeply involved in efforts to find a cure for paralysis. And then Sam got together with SAM on their challenge to create a race car that could be driven by someone with “a profoundly limited range of motion.” Safety was of paramount concern:

It’s not as if user safety isn’t something that engineers don’t deal with all the time, whether they’re designing controls for an elevator or the life support systems in a jet or a smartphone that ideally should not blow up in users’ hands. The difference with designing any kind of system for auto racing, however, is that recklessness is baked into the endeavor — it is, to use techie terminology, a feature, not a bug — and that’s when the competitors are able-bodied.

The article, which is definitely worth a read whether you’re a car person, an electronics person, or – like me – both, talks at length about the technology that went into the Human Machine Interface, the actuators (steering, braking, gas pedal), and the GPS-based path guidance system that would let Sam Schmidt drive the car. Lots of sensors and camera involvement, which I like a lot.

The upshot was that Sam Schmidt has gotten back behind the wheel, and on the Indy track in 2016 revved his Corvette up to 150 m.p.h. That year, he also participated in an actual time-trial race up Pikes Peak, where he had a middling finish – but a finish that placed him ahead of plenty of able-bodied drivers. Since then, he’s driven the SAM car at speeds up to 190 m.p.h.

The SAM-Sam car remains a work in progress, and new features – new technology – continue to be added.

I loved this story because it’s at the intersection of two of my interests. But I also love it because it demonstrates the life-changing power of the technology we work on day in, day out. I don’t imagine there’ll be a lot of quadriplegic race car drivers out there, but the work that SAM and Sam are doing will help a lot of people out before they’re done.

Tech Toy shopping? Here are a few ideas.

This time of year, I like to take a look at the best tech toys out there. I rely on compilations that have been put together by someone else, and this year’s someone else is Kelly Hodgkins, who put together a top 20 list over on Digital Trends. I won’t be covering all 20 – mostly (but not always) avoiding the really pricey ones, like the $280 robot kit aimed at 5-year-olds – but mainly picking out the ones that I found a good combination of interesting and affordable. So here goes.

For kids 8 years and up, Kelly’s suggestions are heavy on the robot kits. I like the idea of giving kits rather than finished products, as there’s no better way for a kid to learn how something works than to build it themselves. (I suppose that completely building something from scratch would be even better for learning, but you know what I mean.) The most reasonably priced robot kit on the list is the Makeblock mBot Smart Robot Kit ($85), which is touted as making “it easy to introduce STEM to elementary school kids.” I’m all in favor of that.

Higher up the price ladder, the Meccano Meccanoid G15 KS Personal Robot costs $195. This is more investment-level, and – with 1200 parts – it’s definitely for a kid who has a demonstrated interest in building their own personal robot and the patience to do so. Maybe for the kid who really enjoyed a starter kit last year?

Maybe because it reminds me of the lower tech “science-y” toys I grew up with, I’m quite partial to the Circuit Maze Board Game. It’s only $30, and “teaches your kids all about how circuits and electricity work.” And “if you’re trying to cut down on screen time, this is a nice alternative.” Sometimes I think that I should cut down on my screen time, so perhaps I should get one of these for myself!

For those 6-7 years and up, I like the SAM Labs Science Museum Inventor Kit for $100. Again, this is somewhat costly so probably wouldn’t be good for a child with only a casual interest. But for a kid who’s really curious, it comes with a “light sensor, a buzzer, a tilt sensor, a motor and a few other bits and pieces” and “includes guides that encapsulate everything from Morse code to alarm systems.” Sounds great for a budding EE.

As mentioned, I like kits, so I was drawn toward the Kano Computer Kit for $130 which gets your child to build their own computer and inspires them to learn programming as well.

On the other hand, I don’t really like the SelfieMic Selfie Stick Microphone. At $27, it’s affordable enough, but who really wants to promote self-centeredness with a toy that combines a selfie stick and a microphone? Just say no!

For younger kids – 2 or 3 to 5 years old and up – there’s that aforementioned expensive robot. But there’s also the Vtech Touch and Learn Activity Desk for $45, which is good for developing an interest in technology while also helping with motor skills, with activities that delve into different subjects.

I don’t know quite what to make of the Play-Doh Touch Shape to Life Studio ($30). Play-Doh is pretty much the ultimate no-tech toy, but it’s wonderful at promoting creativity (and helping the littlest kids learn about colors). This kit lets you scan your Play-Doh creations into your iPad where “they come to life as characters or objects in a colorful, side-scrolling world.”

My favorite tech toy for the littlest guys has to be the Fisher Price Code-a-Pillar. “This robotic caterpillar consists of different sections, ones your child can stick together. Each segment does something different, so by sticking them together in a specific order, your child is creating sequences that form the basis of coding.” Plus it’s cute and “toy like” enough to make it attractive and interesting for really young kids. Not to mention that it’s got a great name.
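The sequencing idea the Code-a-Pillar teaches really is the same one behind any program: commands run in the order you snap them together, and a different order gives a different result. Here’s a minimal sketch of that idea (the segment names and the `run_segments` helper are made up for illustration, not part of the actual toy):

```python
# A toy model of Code-a-Pillar-style sequencing: each "segment" is a
# command, and the program is simply the order you snap them together.
def run_segments(segments, position=0):
    """Execute movement segments in order, returning the final position."""
    moves = {"forward": 1, "backward": -1, "pause": 0}  # hypothetical segment types
    for segment in segments:
        position += moves[segment]
    return position

# The same segments in a different order can behave differently --
# that's the sequencing lesson in a nutshell.
print(run_segments(["forward", "forward", "pause", "backward"]))  # → 1
print(run_segments(["backward", "forward"]))                      # → 0
```

Rearranging the list is the grown-up equivalent of pulling the caterpillar apart and snapping it back together in a new order.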

In our technology-driven world, it’s important not just to expose kids to technology – that’s going to happen whether you like it or not – but to help them gain an understanding of how it all works.

Happy shopping!

Gartner’s Latest Emerging Technologies Hype Cycle

Gartner has an interesting way of looking at emerging technologies. In their Hype Cycle, they place new technologies along a somewhat amusing continuum that extends from the inception of a technology breakthrough – the Innovation Trigger – to the Peak of Inflated Expectations (this is the hype in Hype Cycle), through the Trough of Disillusionment and on to the point where the hype is at least partially realized (the Plateau of Productivity).

Although we don’t tend to get caught up in hype, we’re always interested in looking at emerging technologies and in understanding how they’re perceived in the market. After all, we need to make sure that we’re prepared for whatever’s coming – even if it isn’t coming overnight.

Here’s a look at their most recent Hype Cycle for Emerging Tech.

Not surprisingly, the technologies that are closest to reaching the point where they’re actually fully viable, broadly adopted, and having an impact on productivity are the ones that are most familiar to us: 5G and Deep Neural Network ASICs, Deep Neural Nets (Deep Learning) and Virtual Assistants. Even those technologies that are 5-10 years out from primetime tend to be household words – Smart Robots, Silicon Anode Batteries, Blockchain – at least among techies.

Connected Home, seen by Gartner as 5-10 years in the future, appears to be in the downward slide toward the Trough of Disillusionment. Personally, I don’t see the Connected Home being that far in the future. Nor do I see any widespread disillusionment. Sure, I think we all make a little fun of the refrigerator that calls you when you’re at the grocery store and tells you to bring milk home. But both personally and professionally, I see a lot more adoption of Connected Home devices – Nest thermostats, Ring security doorbells. Plus Gartner has Virtual Assistants at 2-5 years out. I’ll concede that a lot of what those Virtual Assistants are being used for is answering questions like “who won the Super Bowl in 2004?” and requests to play the Macarena. But they’re also being used to adjust the Nest and otherwise connect the Connected Home.

One technology is already in the dreaded Trough – Augmented Reality – which I found pretty interesting. I don’t know whether it’s because it’s not augmented enough, or not real enough yet.

Also interesting: the Digital Twin concept (5-10 years out). This is the merging of the physical and virtual worlds, in which every industrial product will have a digital representation. Intriguing!

Of the technologies that are farther out, the one I’m most looking forward to – and, in some respects, most dreading – is the Autonomous Flying Vehicle. Somewhere in my early childhood, I “read” a book about a flying sandbox. I know that a flying sandbox isn’t quite the same as an Autonomous Flying Vehicle, but I do like the idea of doing low-level flying in a personal vehicle. I just want to make sure they’ve shaken out all the problems that autonomous land-based vehicles are grappling with, not to mention the new ones that come with taking flight. Looks like I have a ways to wait.

In the meantime, Happy Thanksgiving from the Critical Link family to yours.

The source for the Gartner graphic can be found here.

Be On the Lookout for Your Very Own Cobot

A couple of weeks ago, Critical Link traveled to Santa Clara, California, to participate in the Collaborative Robots, Advanced Vision, and AI Conference. These three technologies are very much in the news – buzz words, even –  and each is likely to spark disruptive technology breakthroughs in the coming years. We were there primarily for the Advanced Vision end of things, showing off some of our latest embedded vision designs, which are used in a wide range of industrial and medical applications.

But I wanted to take advantage of the opportunity to learn more about other technology areas, and in particular was drawn to  collaborative robots (cobotics).

Having attended this event, my understanding of cobotics is vastly different from the understanding I came in with. Originally, I was thinking of collaborative robots as machines that work together for purposes of process automation, Industrial IoT applications, etc. My understanding was, not surprisingly, driven by our experience. Here at Critical Link, we’re seeing more and more interest in our SOMs and embedded imaging solutions in these areas, and are really impressed with the ideas and algorithms our customers are coming up with.

But at the conference, the term cobots was clearly meant to describe robots working in conjunction with humans. I’m sure this will take many forms as time and technology move forward, but at the moment the focus seems to be on robotic arms that perform a specific function, with a human working alongside. These arms are for use in factory settings, but are no longer behind enclosures as is typical with traditional industrial robots. The list of safety considerations that have to be taken into account to make this possible is lengthy, but necessary, because the list of ways an industrial-strength robotic arm can injure a human is even longer.

Not surprisingly, both traditional and collaborative robotics are expected to see strong unit growth over the coming years.

That said, it’s not smooth sailing for everyone in cobotics, and sometimes even pioneer companies like Rethink Robotics out of Boston aren’t able to make it work. Last month, they announced that they were shutting down.  Rethink’s problem was not so much technical, but rather a combination of a couple of deals that didn’t work out, and a truly formidable competitor.

The bottom line: the outlook does look positive both for the traditional industrial robots where our vision systems factor in, and for the newer side-by-side cobots.