
Making AI More Energy Efficient

Whatever your feelings are about AI – It’s all great! It’s going to kill us all! You gotta take the bad with the good! It all depends! I’m not quite sure – yet! – most of us recognize that while AI is growing as a force, and will revolutionize entire industries, it does consume an awful lot of energy, leading to concerns about whether it is environmentally sustainable.

Consider these observations, based on the work of the International Energy Agency (IEA):

One of the areas with the fastest-growing demand for energy is the form of machine learning called generative AI, which requires a lot of energy both for training and for producing answers to queries. Training a large language model like OpenAI’s GPT-3, for example, uses nearly 1,300 megawatt-hours (MWh) of electricity, about the annual consumption of 130 US homes. According to the IEA, a single Google search takes 0.3 watt-hours of electricity, while a ChatGPT request takes 2.9 watt-hours. (For comparison, a 60-watt incandescent bulb uses 60 watt-hours when left on for an hour.) If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year – the amount consumed by about 1.5 million European Union residents. (Source: Vox)
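If you want to sanity-check those numbers, a little back-of-the-envelope Python will do it. The only figure I’m adding is a rough ~10.5 MWh/year for an average US household; everything else comes straight from the stats above:

```python
# Back-of-the-envelope check of the figures quoted above (illustration only).
wh_per_chatgpt_query = 2.9      # watt-hours per ChatGPT request (IEA figure)
wh_per_google_search = 0.3      # watt-hours per standard Google search
searches_per_day = 9e9          # daily searches assumed in the IEA scenario

# Added annual demand, as incremental (2.9 - 0.3) and as the full ChatGPT figure.
incremental_twh = (wh_per_chatgpt_query - wh_per_google_search) * searches_per_day * 365 / 1e12
total_twh = wh_per_chatgpt_query * searches_per_day * 365 / 1e12
print(f"Added demand: ~{incremental_twh:.1f} to {total_twh:.1f} TWh/year")  # ~8.5-9.5 TWh, i.e. roughly 10

gpt3_training_mwh = 1300        # reported GPT-3 training energy
us_home_mwh_per_year = 10.5     # rough average annual US household consumption
print(f"Homes-equivalent: ~{gpt3_training_mwh / us_home_mwh_per_year:.0f}")  # ~125, i.e. roughly 130
```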

Given stats like this, and the fact that demand for AI – and the power it consumes along the way – is rapidly growing, it’s no wonder that engineers and researchers are focusing on ways to tamp down AI’s energy demands.

Writing recently in the EE Times, Simran Khoka notes that “data centers, central to AI computations, currently consume about 1% of global electricity—a figure that could rise to 3% to 8% in the next few decades if present trends persist,” adding that there are other environmental impacts that AI brings with it, such as e-waste and the water usage required for data center cooling. She notes that IBM is one of the leaders in creating analog chips for AI apps. These chips deploy phase-change memory (PCM) technology.

PCM technology alters the material phase between crystalline and amorphous states, enabling high-density storage and swift access times—qualities essential for efficient AI data processing. In IBM’s design, PCM is employed to emulate synaptic weights in artificial neural networks, thus facilitating energy-efficient learning and inference processes.

IBM is not alone. Khoka cites a couple of the little guys: Mythic, which:

…has engineered analog AI processors that amalgamate memory and computation. This integration allows AI tasks to be executed directly within memory, minimizing data movement and enhancing energy efficiency.

She also writes about Rain Neuromorphic, which is developing chips that “process signals continuously and perform neuronal computations, making them ideal for creating scalable and adaptable AI systems that learn and respond in real time.”

Applications well suited to analog chips include edge computing, neuromorphic computing, and AI inference and training.

A principal challenge that switching to analog chips presents is ensuring that they have the same precision and accuracy that digital chips yield. Another hurdle is that, at present, the infrastructure behind AI systems is digital.

It’s no surprise that MIT is keeping its eye on ways to reduce the energy consumption of voracious AI models. MIT’s Lincoln Laboratory Supercomputing Center (LLSC) is finding that by capping power and slightly increasing task time, energy consumption of GPUs can be substantially reduced. The trade-off: tasks may take 3 percent longer, but energy consumption is lowered by 12-15 percent. With power-capping constraints in place, the Lincoln Lab supercomputers are also running a lot cooler, decreasing the demand placed on cooling systems – and keeping hardware in service longer. (And something as simple as running jobs at night, when it’s cooler, or in the winter, can greatly reduce cooling needs.)
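Capping GPU power is a surprisingly small knob to turn. Here’s a minimal sketch using NVIDIA’s NVML bindings (the pynvml package); the device index and the 250 W cap are arbitrary examples, setting the limit needs administrator privileges, and this is meant to illustrate the mechanism rather than the LLSC’s actual tooling:

```python
# Minimal sketch: read and cap a GPU's power limit via NVIDIA's NVML bindings (pynvml).
# Requires admin/root privileges to change the limit; the 250 W value is just an example.
from pynvml import (
    nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimit, nvmlDeviceSetPowerManagementLimit,
)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)                    # first GPU in the system

current_mw = nvmlDeviceGetPowerManagementLimit(handle)    # limit is reported in milliwatts
print(f"Current power limit: {current_mw / 1000:.0f} W")

nvmlDeviceSetPowerManagementLimit(handle, 250_000)        # cap at 250 W (illustrative value)
nvmlShutdown()
```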

LLSC is also looking at ways to improve how efficiently AI models are trained and used.

When training models, AI developers often focus on improving accuracy, and they build upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can take testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting down energy waste.

“We’ve developed a model that basically looks at the rate at which a given configuration is learning,” [LLSC senior staff member Vijay] Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.

Jettisoning models that are slow learners has resulted in a whopping “80 percent reduction in energy used for model training.”
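To get a feel for the idea, here’s a toy sketch of that sort of early termination, pruning underperforming configurations at a few checkpoints. The “training” function is just a stand-in for a real learning curve, not the LLSC’s actual predictor:

```python
# Toy illustration of early termination in hyperparameter search:
# periodically rank configurations by their learning progress and stop the laggards.
import random

def partial_train(config, epochs):
    """Stand-in for training; returns a (noisy) validation score after `epochs`."""
    return config["quality"] * (1 - 0.9 ** epochs) + random.uniform(-0.02, 0.02)

configs = [{"id": i, "quality": random.uniform(0.5, 0.95)} for i in range(100)]
survivors = configs

for checkpoint in (2, 5, 10):              # evaluate early, prune aggressively
    scored = [(partial_train(c, checkpoint), c) for c in survivors]
    scored.sort(key=lambda s: s[0], reverse=True)
    survivors = [c for _, c in scored[: max(10, len(scored) // 4)]]  # keep the top slice

print("Configurations trained to completion:", len(survivors), "of", len(configs))
```

Only the survivors (the “top 10 of 100” in Gadepally’s phrasing) ever consume a full training budget.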

Whatever your feelings about AI, it’s comforting to know that there are plenty of folks out there trying to ensure that AI’s power consumption will be held in check.

 


Image source: Emerj Insights

Batteries coming to major home appliances? Maybe…

When you think about home appliances that run on batteries, what’s the first thing that comes to mind? Flashlights? Remote control? Maybe a cordless drill? A weedwacker? Some sort of emergency radio?

Anyway, the home appliances that come to mind are small ones, where portability is key. (Some of you may be old enough to remember remote controls that were tethered to the TV, but a plug-in flashlight? Inconceivable!)

But battery operation may be moving beyond the realm of small appliances and into the domain of heavy appliances like stoves, refrigerators, and washing machines. In April, the Wall Street Journal weighed in:

“Imagine that your major home appliances run on batteries. Using regular cords that plug into standard 120-volt outlets, the appliances charge themselves when power is cheapest and store enough energy so that during a blackout, you can still cook, do laundry and keep your refrigerator going, while also recharging your smartphone and ensuring your Wi-Fi router never loses connectivity.” (WSJ)
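That “charge when power is cheapest” behavior is, at heart, a simple scheduling problem. Here’s a toy sketch, assuming a day-ahead list of hourly prices and a battery that needs a few hours of charging; all the names and numbers are made up for illustration:

```python
# Toy scheduler: charge the appliance battery during the cheapest hours of the day.
hourly_price = [0.32, 0.30, 0.12, 0.10, 0.09, 0.11, 0.18, 0.25,   # $/kWh, hours 0-7
                0.28, 0.27, 0.26, 0.24, 0.22, 0.21, 0.23, 0.27,   # hours 8-15
                0.33, 0.38, 0.41, 0.39, 0.34, 0.29, 0.27, 0.30]   # hours 16-23

hours_needed = 4   # hypothetical: hours of charging to fill the battery
cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])[:hours_needed]

for hour in range(24):
    action = "CHARGE" if hour in cheapest_hours else "idle"
    print(f"{hour:02d}:00  ${hourly_price[hour]:.2f}/kWh  {action}")
```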

Then I saw a more recent article by Bill Schweber in EE Times which wondered whether battery-operated “full-sized, fixed-in-place units” were becoming a thing. Schweber focused on induction-based cooking stoves, which are pretty much the first major appliance that can come with batteries. He cited a couple of companies that will be shipping battery-run induction ranges this year. They’re pretty pricey – higher than similar non-battery ranges – but there are tax credits and rebates that bring the battery vs. non-battery options into near cost parity.

But the battery-ization of appliances won’t stop with induction ranges. If you’ve got a heavy-duty battery in the house, it could be used to power other appliances, or back them up, which will introduce another element of household complexity.

The battery-operated/-backed appliance is an interesting premise but with lots of ripple-effect implications. Batteries of this capacity need careful management and attention, starting with their own sophisticated battery management system (BMS). This will likely need to be integrated into a smart-home controller as well as the owner’s home network and smartphone. Larger battery setups also make physical demands for access, a strong floor and code-approved cabling; there are maintenance, repair and related spare parts issues as well.

And then there’s the issue of what powers the batteries, which need to be kept charged (and, eventually, replaced). While the move towards better living through batteries seems attractive in that it helps us as a society become less reliant on fossil fuels, the power needs to come from somewhere – hopefully from a source like solar or wind, and not from a coal-fired power plant.

Schweber points out a couple of flaws in jumping into an all-battery world. One is the “availability of rare Earth and other materials needed for batteries, along with what it takes to mine, refine and process these elemental substances.” (Battery manufacture is very energy-demanding.)

So caution is in order. However attractive battery power may seem, it isn’t an immediate panacea for the problem of the corrosive impact of fossil fuels on the environment, or climate change.

But as we move – slowly but inevitably – towards more renewable sources of energy, batteries are likely to be increasingly in the mix. We’re seeing it with the (slow but inevitable) growth of interest in EVs, so why not in home appliances? Even those of us still living on the grid may find ourselves living in battery-operated homes in the not-too-distant future.

 

Celebrating the 4th of July

Fireworks have always been one of my favorite parts of 4th of July celebrations. There are few things that rival watching the skies light up on a warm summer’s night. (Bonus points if the fireworks cap off a band concert!)

Fireworks, of course, have been around for a lot longer than we’ve been celebrating our country’s Independence Day. They are believed to have originated in China in the second century BC. Back then, the state of the art was tossing bamboo stalks into a fire. When the hollow air pockets in the stalks heated up, the bamboo exploded. Fast forward a thousand years or so, and the Chinese discovered the earliest form of gunpowder. The gunpowder was packed into the bamboo stalks’ air pockets, and fireworks became a lot more exciting.

The notion of fireworks made its way to Europe in the 13th century, with Italy the first to take them up, and by the 15th century fireworks were widely used in religious and other celebrations. Interestingly, the Italians are still associated with fireworks. Two of the world’s largest makers of fireworks – US-based Grucci and Zambelli – have roots in Italy, and were founded by Italian immigrants.

Over the years, fireworks displays have gotten more and more intricate, and sophisticated technology is often used:

Planning a professional fireworks show involves significant preparation, much of which is done using computers and simulation programs. Choreography software tools like Finale Fireworks or ShowSim allow designers to preview their creations without needing actual explosives. Interactive 3D perspectives enable creative developers to visualize the show from various audience viewpoints.

Before the advent of “e-match [electric igniters],” pyrotechnics were manually fired using switch panels. Now, however, script files can be exported to the firing system after the show has been fully programmed, ensuring precise timing for each launch and explosion. This advancement leads to better and safer shows. (Source: EEPower)
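Under the hood, that exported script is essentially a timed cue list. Here’s a stripped-down sketch of the idea; the cue format and the “firing” step are hypothetical stand-ins, not any vendor’s actual file format:

```python
# Stripped-down idea of a fireworks firing script: timed cues sent to e-match channels.
import time

# (seconds from show start, firing channel, effect description) -- hypothetical cues
cues = [
    (0.0, 1, "opening salute"),
    (2.5, 2, "red peony shell"),
    (4.0, 3, "white strobe mine"),
    (6.5, 4, "blue chrysanthemum"),
]

show_start = time.monotonic()
for offset, channel, effect in cues:
    time.sleep(max(0.0, show_start + offset - time.monotonic()))  # wait for the cue time
    print(f"t={offset:4.1f}s  fire channel {channel}: {effect}")   # stand-in for ignition
```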

Because of this, fireworks displays can showcase complex patterns, often using letters and numbers to spell out messages.

For all their beauty, there are a number of safety and environmental concerns involving fireworks. While professionally designed and operated shows have a good track record when it comes to safety, amateurs playing around with consumer fireworks (which are illegal in many states) cause thousands of injuries in the US every year.

Then there are the environmental implications:

The launch of fireworks introduces various chemicals into the atmosphere, including oxidizers like perchlorates, which can contaminate water sources when they dissolve. Fireworks can create smoke and particulate matter, leading to poor air quality for hours after the display. Additionally, the metallic compounds and salts used to produce the colors in fireworks can harm both people and the environment.

Fireworks particles can be especially harmful to those with respiratory-related diseases such as asthma and COPD, and those with heart conditions.

Not surprisingly, fireworks – especially those of the amateur variety – have been known to start fires, so their effect on air quality can extend well beyond the impact of the fireworks themselves.

Because of these concerns, many are looking for new ways to create aerial displays to help celebrate the 4th. Increasingly, electronics are used for light displays, which can run the gamut from lighting up buildings and monuments with red, white, and blue to “more elaborate displays, such as laser shows or choreographed light shows.”

Some cities are eliminating traditional fireworks displays in favor of drones:

Drone light shows are typically created using a fleet of drones equipped with LED lights. The drones are programmed to fly in formation and create specific patterns or images. The designs can be simple, such as a heart or a star, or more complex, such as a patriotic flag or a corporate logo. Drones can create messages in the sky by flying in formation and carrying lights, a newer technique that is becoming increasingly popular.
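To give a feel for the programming involved, here’s a toy calculation of target positions for one simple formation, a ring of drones; real shows layer on flight-path planning and collision avoidance on top of this kind of geometry:

```python
# Toy example: compute target positions for drones arranged in a circle in the sky.
import math

def circle_formation(num_drones, radius_m, center=(0.0, 0.0, 100.0)):
    """Return (x, y, z) targets, in meters, for drones evenly spaced on a circle."""
    cx, cy, cz = center
    positions = []
    for i in range(num_drones):
        angle = 2 * math.pi * i / num_drones
        positions.append((cx + radius_m * math.cos(angle),
                          cy + radius_m * math.sin(angle),
                          cz))
    return positions

for i, (x, y, z) in enumerate(circle_formation(num_drones=12, radius_m=30)):
    print(f"drone {i:2d}: x={x:6.1f}  y={y:6.1f}  z={z:5.1f}")
```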

Then there’s augmented reality.

Augmented reality fireworks aren’t physical fireworks. They are images projected onto a digital device. AR fireworks can be used to create a variety of effects, like making it appear as if fireworks are exploding in the sky overhead or shooting out of your hands.

I’m not that wild about the idea of AR fireworks. For all their faults, traditional fireworks displays bring communities together. AR fireworks apps – which had their moment during the pandemic – are an individual experience. Personally, I’d rather hear the crowd oohing and aahing as together we enjoy the same display – even if it’s a laser or drone show. (Just as long as we don’t revert to hollow bamboo stalks tossed in the fire.)

Anyway, however you celebrate it, wishing all our US readers a Glorious Fourth of July.

 

 


Source of Image: Syracuse.com

 

Power vs. Performance: some stories never get old

Juggling competing demands is a challenge as old as technology. When it comes to embedded systems, the story is often about how to maximize performance while minimizing power consumption. It’s a story that never gets old, and in a recent embedded.com article, Emily Newton has a few suggestions for best practices, design steps that will help “achieve good power management in embedded systems while ensuring they meet or exceed customers’ needs and expectations.”

First up: pay attention to device architectures, separating hardware from software when you’re plotting your architecture out, a separation that enables “independent testing and development, and forces the issue on focusing on hardware constraints.” At this point in the design process, designers articulate what their power management targets are – and how they’re going to meet those targets. Awareness of what constraints you’re dealing with, and what resources are available to handle them, “guides future choices.”

Having a detailed device architecture also provides the opportunity to define data inputs (information types) and outputs (data sinks); to specify the role subsystems will play; and to think through the protocols and communication approaches that will meet interface and component needs.

Once people create comprehensive device architectures, they can understand how certain choices could affect specific system parts or operations. Seeing that interdependence helps designers make the best decisions to optimize performance while saving power.

Newton has plenty of suggestions for other areas as well. One is to include in your design a block diagram illustrating “average power consumption and minimum energy used.” Understanding how programming relates to energy consumption “can show which parts of the design need further improvements and whether the selected battery for the system is sufficient to maintain reliable performance.”
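That kind of power-budget thinking is easy to sketch. For example, for a duty-cycled device that sleeps most of the time and wakes briefly to do its work, a quick estimate tells you whether the chosen battery is up to the job (all the currents and times below are hypothetical):

```python
# Back-of-the-envelope average power / battery life estimate for a duty-cycled device.
# All values are hypothetical, for illustration only.
active_ma, sleep_ma = 40.0, 0.005      # current draw when awake vs. asleep (mA)
active_s, period_s = 0.5, 60.0         # awake 0.5 s out of every 60 s

duty_cycle = active_s / period_s
avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)

battery_mah = 2400                     # e.g. roughly a pair of AA cells
hours = battery_mah / avg_ma

print(f"Average current: {avg_ma:.3f} mA")
print(f"Estimated battery life: {hours:,.0f} h (~{hours / 24 / 365:.1f} years)")
```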

Some programming choices that will reduce power consumption while maintaining performance are keeping the circuit board’s operating voltage as low as possible and selecting integrated circuits that align with the power-saving goals.

Another thing that should be looked at is turning the system, and its components, off when they’re not in use, which “will curb unnecessary power usage.” She also suggests utilizing operating system features that can keep power consumption down.

Newton also addresses some basics: making good choices when it comes to materials, and being very strategic when it comes to deciding the system features that are absolutely necessary. (As we all know, there’s always a features continuum that ranges from essential to “nice to have” to absolutely unnecessary.) Decisions on features will definitely impact power consumption and performance.

Newton stresses the importance of testing to make sure that power management and performance live up to expectations.

All tests should mimic real-life conditions as closely as possible. That’s the best way to check that the power management in embedded systems will provide customers with consistently excellent performance, providing the reliability they demand and expect…Plan your testing schedule to include embedded and software testing. Whereas the second type only involves software, the former encompasses all the system’s aspects.

The story wraps up with a reminder that there are always emerging technologies that engineers should be open to using. (An example she cites is machine learning.) She adds:

Some designers also find digital twins useful in their process, particularly when improving power management in embedded systems that are more complicated than most. The digital twin environment allows studying how different power management decisions affect performance before implementing these options in real life.

There’s nothing all that revelatory or novel in Newton’s article, but it’s still a story that, when it comes to designing embedded systems, never gets old.

 


Source of Image: Dreamstime

 

 

The Industrial Internet of Things

I recently came across an EE Times article on the use of wearables in Industrial Internet of Things (IIoT) settings. We tend to think of wearables as items like the Fitbits used to track our fitness. In the IIoT sphere, it’s “smart glasses, wristbands, body sensors, and even smart clothing” that humans use to interact with machines and processes.

The IIoT applications where wearables shine include those in hazardous situations where safety is critical. Think of wearables equipped with sensors to monitor whether the wearer has been exposed to dangerous, potentially fatal, gases. Operational efficiency is another application area. Smart glasses, for example, could enable workers to read product manuals or access operating status info, hands-free, in real time.

There are plenty of challenges:

Privacy concerns, data security, and the need for robust infrastructure to handle the vast amounts of data generated are significant hurdles. Additionally, ensuring the wearables are comfortable and intuitive for workers to use is crucial for their adoption.

These challenges are surmountable. Technological advances will result in wearables that are “more powerful, efficient, and affordable, leading to wider adoption.”

There’s more to the IIoT than wearables, of course, and Tech Target has a good overview of the Industrial Internet of Things (written by Alexander Gillis, Brien Posey, and Linda Rosencrance). They start with a simple definition:

The industrial internet of things (IIoT) is the use of smart sensors, actuators and other devices, such as radio frequency identification tags, to enhance manufacturing and industrial processes. These devices are networked together to provide data collection, exchange and analysis. Insights gained from this process aid in more efficiency and reliability. Also known as the industrial internet, IIoT is used in many industries, including manufacturing, energy management, utilities, oil and gas.

Then there’s this bit, which I loved:

IIoT uses the power of smart machines and real-time analytics to take advantage of the data that dumb machines have produced in industrial settings for years.

The underlying driver: when it comes to capturing, analyzing, and communicating about key data, smart machines are just plain better than humans. Even smart humans.

Connected sensors and actuators enable companies to pick up on inefficiencies and problems sooner, saving time and money while also supporting business intelligence efforts. In manufacturing specifically, IIoT has the potential to provide quality control, sustainable and green practices, supply chain traceability and overall supply chain efficiency. In an industrial setting, IIoT is key to processes such as predictive maintenance, enhanced field service, energy management and asset tracking.

The article then goes into the components that make up the IIoT, including “cloud platforms… edge gateways, sensors, actuators, and edge nodes;” lists the many different industries (automotive, agriculture, oil and gas, utilities) where the IIoT is deployed; and goes on to describe the benefits, risks, and challenges.

The authors also provide a few thumbnail use cases:

…commercial jetliner maker Airbus launched what it calls the factory of the future, a digital manufacturing initiative to streamline operations and boost production. Airbus integrated sensors into machines and tools on the shop floor and outfitted employees with wearable tech — e.g., industrial smart glasses — aimed at reducing errors and enhancing workplace safety.

Robotics manufacturer Fanuc has used sensors in its robotics, along with cloud-based data analytics, to predict the imminent failure of components in its robots. Doing so enables the plant manager to schedule maintenance at convenient times, reducing costs and averting potential downtime.

Magna Steyr, an Austrian automotive manufacturer, is using IIoT to track its assets, including tools and vehicle parts, as well as to automatically order more stock when necessary.
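The Fanuc example boils down to watching a sensor stream for early signs of trouble. Here’s a bare-bones sketch of that idea using simulated vibration readings; the data, window size, and threshold are all made up for illustration:

```python
# Bare-bones predictive-maintenance idea: flag a machine when a sensor reading
# drifts well above its recent baseline. Data and thresholds are simulated.
from statistics import mean, stdev

def check_for_anomaly(readings, window=20, sigma=3.0):
    """Return True if the latest reading sits `sigma` std-devs above the recent baseline."""
    baseline = readings[-(window + 1):-1]
    if len(baseline) < window:
        return False
    return readings[-1] > mean(baseline) + sigma * stdev(baseline)

vibration_mm_s = [2.0 + 0.1 * (i % 5) for i in range(60)]   # steady baseline
vibration_mm_s.append(6.5)                                  # sudden spike: bearing wear?

if check_for_anomaly(vibration_mm_s):
    print("Alert: schedule maintenance before the component fails")
```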

The IoT has become so much a part of our daily lives – sometimes, admittedly, in fairly trivial ways. It’s good to be reminded that the IIoT is out there as well, changing the way many industries work.


Image source: i-SCOOP

Technology in the NICU

I recently came across an EE Times article by Ray Lumina that describes how electroforming can be used for medical products, especially those that help save lives in the Neonatal Intensive Care Unit (NICU).

Electroforming is a preferred method for medical technology manufacturing, as it is highly versatile and can adhere to extremely precise specifications, complexity and surface finish. Electroformed optical components are created from plated metal, electro-deposited to provide a precision reproduction of a surface. Every component is an exact replication of the mandrel, making this an economically favorable manufacturing method.

Lumina writes that electroforming is ideal for the production of medical instruments because of its ability to create “high-volume, quality components with extreme accuracy and design complexity.”

An application that he cites is the use of electroforming to create:

…custom reflectors for newborn-baby warming devices. The warmers combine advanced technologies with innovative features to deliver state-of-the-art…The custom electroformed reflector simplified a complex design, ultimately reducing production time and increasing heat output with an improved gold plating that reflects a greater amount of energy.

Electroforming is not exactly a breakthrough technology. It’s been around for nearly two hundred years, and many of the products that use it are decidedly low-tech. Think jewelry.

But the article got me to think about more high-tech instruments and devices that are used in critical, life-saving settings like the NICU.

Children’s Hospital of Orange County has a good list of the “amazing technological advances found in our NICU.” Among the technologies that CHOC deploys is a “monitoring system [which] uses near infrared light spectrometry to monitor brain and kidney function.” You’re probably familiar with CPAP machines because there are plenty of TV ads aimed at adults with sleep apnea who need continuous positive airway pressure. Well, there are tiny CPAP machines that work with tiny babies. Another lung-related technology is the high frequency oscillatory ventilator.

Sophisticated telemetry monitors newborns, in real time, for seizure activity, and provides physicians with real-time access to the data on their patients. There’s something wonderful called a “giraffe bed” which is “designed to minimize any unnecessary stimulation to our babies. The beds rotate 360°, can be lowered or elevated as needed, and slide out of the temperature-controlled microenvironment to make it easier to position the baby for all types of procedures without disturbing the infant.” And of paramount importance: the giraffe beds enable parents to touch their little ones.

Like CPAP, most of us have heard of extracorporeal membrane oxygenation (ECMO), given that this technique is used with some frequency to combat COVID. “With ECMO, blood from the baby’s vein is pumped through an artificial lung where oxygen is added and carbon dioxide is removed. The blood is then returned back to the baby.”

The NICU at Children’s of Orange County also uses bar code scanning to make sure that medications, tests, and treatments are for the right patient.

These technologies aren’t unique to CHOC, of course. Other children’s hospitals, and the NICU departments of more general-purpose hospitals, also use these and similar technologies.

Having so many life-saving high-tech devices in the NICU is wonderful. If you’ve had, or know someone who’s had, a preemie or a newborn with challenging health issues, you know how critical this technology can be to give these babies a fighting chance. But the more time these babies spend in the NICU, the more likely they are to come down with a hospital-borne infection. So the goal of the medical community is to reduce NICU admissions and the length of stay (LOS) for the babies who are admitted there. A recent NIH study shows that “equipping care managers with better technological tools can lead to significant improvements in neonatal health outcomes as indicated by a reduction in NICU admissions and NICU LOS.”

And to keep babies out of the NICU, telemedicine and remote monitoring are being increasingly deployed.

Another example of how technology is life-enhancing.

 


Source for image: ResearchGate

What’s brewing with AI?

Not that I’ve given it all that much thought, but if I’d been asked, I don’t think I’d have put brewing beer very high on the list of candidates for AI involvement.

Not that the beer industry is any stranger to using up-to-date technology. It’s widely used in brewing’s production and logistics processes. But making beer that tastes better? I would have said that this is more art than science. Sure, the major industry players with the more widely known and consumed brands are more focused on the “science” parts of production and logistics – after all, a Corona’s a Corona and a Bud’s a Bud. And the microbrewers (not to mention the homebrewers) would come down more on the “art” side, using trial and error to come up with the ideal mix.

Of course, even Corona and Budweiser are always introducing new products, and whether you’re one of the big guys or one of the little guys, creating a beer that tastes good isn’t easy. Figuring out whether – to borrow from an ancient Miller ad – a beer tastes great and/or is less filling can involve drafting (and educating) employees and “civilian” beer drinkers to act as taste testers for their products. But, as a recent MIT Technology Review article said, “running such sensory tasting panels is expensive, and perceptions of what tastes good can be highly subjective.”

Enter AI.

Research published in Nature Communications described how AI models are being used not only to predict how consumers will rate a beer, but also to figure out how to make a beer that tastes better.

This wasn’t an overnight process. Over a five-year period, researchers analyzed the chemical properties and flavor compounds in 250 commercial beers.

The researchers then combined these detailed analyses with a trained tasting panel’s assessments of the beers—including hop, yeast, and malt flavors—and 180,000 reviews of the same beers taken from the popular online platform RateBeer, sampling scores for the beers’ taste, appearance, aroma, and overall quality.

This large data set, which links chemical data with sensory features, was used to train 10 machine-learning models to accurately predict a beer’s taste, smell, and mouthfeel and how likely a consumer was to rate it highly.
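To make the approach a little more concrete, here’s a toy version of the idea: fitting a regression model to (synthetic) chemical features to predict a consumer rating. The actual study used 250 beers, a trained panel, 180,000 RateBeer reviews, and ten different models; this is just a sketch of the shape of the problem:

```python
# Toy version of the idea: predict a consumer rating from chemical features.
# Synthetic data for illustration only; not the Nature Communications pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))        # stand-ins for compounds: lactic acid, esters, ...
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.2, size=500)  # "rating"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
print("Predicted rating for a sample rich in the first compound:",
      round(float(model.predict([[0.9, 0.5, 0.2, 0.5, 0.5]])[0]), 2))
```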

The result? When it came to predicting how the RateBeer reviewers had rated a beer, the AI models actually worked better than trained tasting experts. Further, the models enable the researchers “to pinpoint specific compounds that contribute to consumer appreciation of a beer: people were more likely to rate a beer highly if it contained these specific compounds. For example, the models predicted that adding lactic acid, which is present in tart-tasting sour beers, could improve other kinds of beers by making them taste fresher.”

Admittedly, having lactic acid in a beer doesn’t sound all that appealing. But if the beer tastes fresher, well, just don’t read the fine print on the ingredients list.

One area where they anticipate the AI approach will prove particularly effective is in the development of non-alcoholic beers that taste as good as the real thing. This will be great news for those who want to enjoy a beer without having to consume any alcohol.

There are other instances of AI being used in brewing. Way back in 2016, a UK AI software startup, IntelligentX, came out with four beers based on its Automated Brewing Intelligence algorithm. The release of Amber AI, Black AI, Golden AI, and Pale AI caused a brief flurry of excitement as the first AI-developed beers. Unfortunately, it looks like none of them made much of an impact in the beer market. When I searched for them, I couldn’t find any references beyond 2019.

Maybe the models that the Belgian researchers produced will have more luck creating a successful AI beer.

——————————————————————————————

The full research report from Nature Communications can be found here.

MIT Tech Review – Breakthrough Technologies for 2024

Each year, the MIT Technology Review takes a look at “promising technologies poised to have a real impact on the world.” And each year I enjoy taking a look through their picks.

Not surprisingly, the first technology cited was AI. After all, something AI-related has had a spot on pretty much every future/breakthrough tech list for the last couple of years. In this case, the technology that gets the nod is generative AI which, thanks to tools like ChatGPT, has “reached mass adoption in record time.” There were initial hiccups, like Microsoft’s Bing Chat generating gibberish and Google’s Bard giving out wrong information, but the technology providers rebounded quickly. Users are already putting it to work on routine office tasks (e.g., summarizing emails) and on creating images. “Never has such radical new technology gone from experimental prototype to consumer product so fast and at such scale.” What’s lagging behind is figuring out exactly what its impact will be.

We have a better idea what the impact of the first gene-editing treatment is going to be. That first treatment, from Vertex, tackles sickle-cell anemia. The good news: it works and is curing patients. The bad news: it’s prohibitively expensive – way too expensive and complex to deploy in Africa, which is where it is most needed, given that virtually all those who suffer from sickle-cell are Black. Next up: simpler, cheaper delivery mechanisms.

Heat pumps seem like a rather old-school technology to be on a list of breakthroughs, but now they’re coming into their own. Heat pumps that run on renewably-sourced electricity can decrease reliance on fossil fuel power, substantially cutting emissions. To date, heat pumps have been used largely in homes and offices, but they’ll soon be used in manufacturing, where the impact can be especially significant.

Tech Review sees Twitter killers as a breakthrough, as users concerned with the problems that have beset Twitter (or X, if you like) since the takeover by Elon Musk have been drifting off that platform and onto decentralized, better-moderated, and more secure sites like Meta’s Threads, which now has nearly 100 million monthly users. Even though Twitter’s traffic and number of daily users are down, its 120 million daily users still dwarf Threads’ monthly numbers.

Back on the energy front, there are enhanced geothermal systems, which use hydraulic fracturing (a.k.a. fracking) to enable energy extraction under an expanded range of geological conditions, increasing energy production. The jury’s still out on whether hydraulic fracturing will do more harm (increased seismic activity) than good (more renewable energy), but for now Tech Review is keeping an eye on it.

You may have been seeing ads for appetite-suppressing weight-loss drugs (Oh, Oh, Oh, Ozempic) that are now cleared for use by those looking to lose weight, rather than just for the previous use case of diabetes control. Weight-loss drugs are expensive – upwards of $1,000 a month – and the costs are not generally covered by insurance for weight loss. There are a number of unpleasant side effects, and many have found that, once they go off the drugs after achieving their desired weight loss, they quickly gain those lost pounds back. But a lot of pharma companies are coming up with new drugs, and it may not be all that long before there’s a cure for obesity.

What else does Tech Review see as major breakthroughs?

Chiplets, the “smaller, more specialized chips [that] can extend the life of Moore’s Law.” Rather than focusing on trying to make transistors smaller and smaller, and “cramming” chips full of them, chip manufacturers have turned their sights to chiplets, purpose-built for a specific function. Emerging standards will make it possible to more easily combine chiplets from different manufacturers.

New technology is enabling the creation of super-efficient solar cells. That new technology is perovskite solar. “When silicon and perovskites work together in tandem solar cells, they can utilize more of the solar spectrum producing more electricity per cell.”

Other face computers (Google Glass, Microsoft HoloLens…) have failed to gain much traction, but Apple’s Vision Pro, with a much better display, may be the breakthrough device the market has been waiting for. Apple calls it a spatial computer; the Vision Pro is “the company’s first mixed-reality headset.” It differs from virtual reality in that it “overlays digital content onto your real-world surroundings.” The Vision Pro uses two micro-OLED displays, which produce a higher-resolution experience than the liquid crystal displays found in other VR headsets. At $3,499, the Vision Pro is ultra-expensive, but it’s Apple, so folks may be willing to pay up.

Finally, exascale computers “capable of crunching a quintillion operations per second are expanding the limits of what scientists can simulate.” Quintillion, huh? Remember when we used to think a billion was a big number? Anyway, the big question hovering over these super-duper computers is energy consumption.

That’s it for the Tech Review’s 2024 technology breakthroughs. A pretty good list. Do you think they missed anything?

Battery Progress

In the tech world, we’re pretty much accustomed to cycles of perpetual improvement. For years – make that decades – the semiconductor industry lived by the Moore’s Law mantra, which held that, every two or so years, the number of transistors in an integrated circuit doubled. Moore’s Law may be slowing down, and there’s even some debate over whether the “Law” (it’s not really a law; it’s an observation) still holds. But there’s no denying that Moore’s Law helped drive semi manufacturers along the path to continuous improvement. And there’s no denying that today we all carry in our pockets a powerful computing device that’s far beyond what most were imagining when Gordon Moore first posited his law in 1965.

What’s proven largely true for transistors on ICs generally holds throughout the technology ecosystem. Those powerful devices in our pockets connect to the Internet at speeds that are a far cry from the connection rates we were happy with in the days of dialing up to check our AOL mail and browse through a bulletin board.

Across the board in the tech world, things do tend to get smaller, faster, cheaper, and smarter.

So it has been in the battery world. As Bill Schweber wrote in his recent EE Times piece:

We’ve become accustomed to improvements in battery performance and technology, especially due to the development of lithium-based cells in the 1980s. By whatever figure of merit (FOM) you use, batteries have dramatically improved in capabilities while costs have come down.

Schweber writes that how you view improvements in batteries depends on what matters for different use cases. For instance, weight doesn’t matter much for a fixed power-supply installation that’s not going anywhere. It’s a different matter when the battery is powering an electric vehicle.

But overall, the history of batteries has been one of forward progress since they were invented in their first form in the early years of the 19th century by Alessandro Volta, whose name lends itself to volt, voltage, voltmeter… While batteries have been around for over 200 years, it was only in the early 20th century that dry cell batteries usable for mass-market applications – like flashlights and toys – first came to market. An early use of rechargeable batteries was for electric vehicles, which in the early 1900s commanded about a one-third market share in the US.

Battery capacity has been a major factor when it comes to improvements in batteries. These improvements came along at nothing like the Moore’s Law rate, but steadily. But:

…the real breakthrough came with the development of lithium-based battery chemistry, developed in large part by American chemist John B. Goodenough and others in the 1980s.

(This breakthrough was good enough to earn Goodenough, M. Stanley Whittingham, and Akira Yoshino the Nobel Prize in Chemistry in 2019.)

Batteries continue to improve, with significant decreases in the cost per stored watt. A lot of R&D money is being invested in better batteries. The EV industry is banking on cheaper, lighter batteries. But can improvements continue?

Not necessarily, at least not at the high, sustained rates that many are counting on.

Which takes us back to Moore’s Law.

Here’s Schweber:

I think the present battery optimism is a consequence of the success of the semiconductor industry, which, somewhat incorrectly, has induced the expectation that all it will take is lots of smart people and investment. Since the development of the integrated circuit in 1959, we’ve steadily improved density and performance by orders of magnitude. Bolstered by these “roadmaps,” we’ve made future progress look relatively easy and largely predictable—even if it wasn’t.

So the life of perpetually improving batteries may be draining:

Could it be that we are approaching a plateau in what any battery chemistry and improvements will allow? While there will be progress, it may be incremental—on the order of a few percentage points per year—certainly not enough to meet the expectations of politicians, pundits and the public. There’s no assurance of battery breakthroughs, no matter how much is invested. Perhaps a non-chemical battery innovation is needed?

Interesting, isn’t it, how we get so used to expecting radical progress?

Smart vending machines (really smart). Who knew?

At Critical Link, we stock our kitchen with snacks – free snacks – so we’re a vending-machine-free environment.

Not that I have anything against vending machines.

Who among us hasn’t stood in front of one, looking through row upon row of goodies, debating whether to hit E2 (M&M’s) or B6 (Doritos), and hoping, after pressing your magic number (B6), that your snack (Doritos) gets released and you don’t have to wrestle with the machine to shake it loose?

But I haven’t given much thought to vending machine technology since they replaced the mechanical pull knobs with state-of-the-art pushbuttons way back when.

As it turns out, vending machines have gotten smart. Really smart. Facial recognition smart.

I learned this after coming across an article about Canada’s University of Waterloo, which is replacing M&M/Mars branded smart vending machines on its campus with dumbed-down machines that aren’t gathering facial recognition data.

The issue emerged when a Waterloo student, innocently hoping to buy some M&M’s, saw an error message pop up that read “Invenda.Vending.FacialRecognitionApp.exe,” indicating that the app had failed to launch.

This prompted the student to post a pic of the error message on Reddit and ask why a candy machine would have facial recognition.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called mathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines—without ever requesting their consent. (Source: ArsTechnica)

When Stanley reached out for comment, Adaria Vending Services, which stocks and services the machines in question, responded:

“…what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

Adaria further added that the machines comply with the EU’s strict General Data Protection Regulation (GDPR) privacy law.

Invenda, which makes the machines, pointed out that its sensing app, despite its name (Invenda.Vending.FacialRecognitionApp.exe), is used for detection and analysis of “basic demographic attributes” and data like how long someone spends making their choice.

“The vending machine technology functions as a motion sensor, activating the purchasing interface upon detecting individuals, without the capability to capture, retain, or transmit imagery. Data acquisition is limited to assessing foot traffic at the vending machine and transactional conversion rates.”

I would think that much of the useful information you can get out of a vending machine can be gotten the old-fashioned way: counting the volume purchased of different items when the machine is being restocked. But more nuanced information would, of course, be useful to both Invenda and M&M/Mars. Analysis might yield intelligence on where to position the more valued items. Info on the gender and age (which I’m assuming are among the basic demographics gathered) of those who look but don’t purchase could help them figure out how to capture the business of those non-purchasers.

Still, I don’t blame the students for not wanting vending machines smart enough to do surveillance on their campus.

The University of Waterloo, by the way, is a big STEM school, known for science, computer science, engineering, math…You gotta love the fact that it was the techies who jumped on this case and sleuthed it out!