
Discover the Future of FPGA Innovation with Altera & Critical Link

Join us on February 11th, 2025, at 9 AM PT for an exclusive LinkedIn Live event that will redefine the possibilities of FPGA technology. Hear from industry leaders and innovators as they share insights, strategies, and success stories driving the future of tech.

Gain insights into the latest FPGA advancements, Altera’s strategies, and how we are shaping the FPGA landscape with our customers and partners.

– Accelerating Innovation with Altera: Hear from Altera CEO Sandra Rivera as she shares Altera’s vision and how investments in cutting-edge silicon and software will drive sustainable, long-term success as an independent company.
– Spotlight / Success Stories: Discover how industry leaders like Tejas Networks, DigiKey, and Critical Link leverage Altera’s solutions to innovate and achieve breakthrough results, demonstrating Altera’s commitment to driving customer innovation through focus, quality, and execution.
– Altera Partner Program: Learn more about Altera’s new partner program and get to market faster with end-to-end support and resources from Altera’s broad partner ecosystem.

Speakers:
– Sandra Rivera, Chief Executive Officer of Altera
– Dave Doherty, President of DigiKey
– Arnob Roy, Co-founder, Executive Director, COO of Tejas Networks
– Tom Catalino, Co-founder, Vice President of Critical Link

 

REGISTER TODAY!

AI-driven robots that can help folks live more independently? Bring it!

A couple of weeks ago, I posted about attempts to help alleviate one of the downsides of AI: its power consumption, and the environmental impacts that come with it.

What I didn’t mention in that post is that there are small ways in which we can help reduce AI-related power consumption, starting with not buying into any “AI-for-everything” madness, in which AI is used even when it’s not needed and/or adds limited value.

As consumers, do our home appliances have to be all that smart? Do we really need AI to tell us what’s in the fridge when we can just open the door and look? Do we need to deploy AI for every Google search we do? Shouldn’t we be willing to “wait” an extra second for a non-AI search that may well yield superior results? (Public service announcement: if you don’t want Google to produce an AI Overview for every search you do, use the Web option when you search.)

But there are plenty of applications where AI can and should be used in the home. And one was chronicled last spring in MIT’s Technology Review.

More than twenty years ago, Henry Evans – then only 40 years old – suffered a major stroke and ended up a quadriplegic who was unable to speak. Over the years, he was able to use his eyes and a letter board to communicate, but in most day-to-day situations, Henry has to rely on caregivers.

Then robotics came on to the scene. Sort of.

In 2010, Henry saw a demo of a primitive “metal butler,” and asked himself why something like that wouldn’t work for him.

There was a solid reason why not. While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. Out in the real, messy world, furniture and floor plans differ wildly; children and pets can jump in a robot’s way; and clothes that need folding come in different shapes, colors, and sizes. Managing such unpredictable settings and varied conditions has been beyond the capabilities of even the most advanced robot prototypes.

But thanks to AI, that may be about to change, giving robots the opportunity to advance beyond the skills that are driven by purpose-built software and to acquire new skills and figure out new environments faster than they ever could before.

Henry Evans has already been working with experimental robots that are letting him take care of tasks like brushing his hair. “Stretch,” the robot Henry is currently working with – the brainchild of Georgia Tech professor Charlie Kemp – goes beyond specific-purpose tasks, like hair-brushing, and lets users “plug in their own AI models and use them to do experiments.” As Stretch learns more, it can do more.

With AI-powered software, robots will be able to acquire new skills automatically rather than having each problem solved for them separately, with every element plotted out in “excruciating detail.” Not that those painstakingly acquired skills aren’t impressive. Who hasn’t marveled at a video of humanoid (or dog-oid) robots – climbing stairs, boogeying, opening doors? But researchers are now moving beyond pure physical dexterity: they’re experimenting with building “general purpose robot brains” in the form of neural networks, and tapping generative AI in ways that go beyond “the realm of text, images, and videos and into the domain of robot movements.” This will allow robots to quickly adapt to new environments and learn new tasks.

The cost (and size) of robots will come down, their utility will go up, and for folks like Henry Evans, the world will open up. It already is. While Stretch is imperfect – it’s buggy and bulky – it’s a declaration of independence. “All I do is lay in bed, and now I can do things for myself that involve manipulating my physical environment.” These include playing with his granddaughter, holding his own hand of cards, and eating a fruit kabob.

I’ve always maintained that one of the very best things about being an engineer is doing work that really does improve people’s lives.

AI-driven robots that can help folks live more independently? Bring it!

 

_________________________________________________________

Source of Image (Henry Evans giving his wife Jane a rose): IEEE Spectrum

 

Making AI More Energy Efficient

Whatever your feelings are about AI – It’s all great! It’s going to kill us all! You gotta take the bad with the good! It all depends! I’m not quite sure – yet! – most of us recognize that while AI is growing as a force, and will revolutionize entire industries, it does consume an awful lot of energy, leading to concerns about whether it is environmentally sustainable.

Consider these observations, based on the work of the International Energy Agency (IEA):

One of the areas with the fastest-growing demand for energy is the form of machine learning called generative AI, which requires a lot of energy for training and a lot of energy for producing answers to queries. Training a large language model like OpenAI’s GPT-3, for example, uses nearly 1,300 megawatt-hours (MWh) of electricity, the annual consumption of about 130 US homes. According to the IEA, a single Google search takes 0.3 watt-hours of electricity, while a ChatGPT request takes 2.9 watt-hours. (For comparison, a 60-watt incandescent light bulb draws 60 watt-hours of juice in an hour.) If ChatGPT were integrated into the 9 billion searches done each day, the IEA says, the electricity demand would increase by 10 terawatt-hours a year — the amount consumed by about 1.5 million European Union residents. (Source: Vox)
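Just to sanity-check the scale of those numbers, here’s the arithmetic behind them, using the figures quoted above (the per-query watt-hours and the 9-billion-searches-a-day estimate are the IEA’s, not mine):

```python
# Back-of-the-envelope check of the figures quoted above (IEA estimates).
searches_per_day = 9e9        # daily Google searches
wh_google = 0.3               # watt-hours per conventional search
wh_chatgpt = 2.9              # watt-hours per ChatGPT-style request

extra_wh_per_day = searches_per_day * (wh_chatgpt - wh_google)
extra_twh_per_year = extra_wh_per_day * 365 / 1e12   # Wh -> TWh
print(f"Additional demand: {extra_twh_per_year:.1f} TWh per year")
# ~8.5 TWh/year, in the same ballpark as the ~10 TWh figure cited above
```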

Given stats like these, and the fact that demand for AI – and the power it consumes along the way – is rapidly growing, it’s no wonder that engineers and researchers are focusing on ways to tamp down, or at least slow down, AI’s energy demands.

Writing recently in the EE Times, Simran Khoka notes that “data centers, central to AI computations, currently consume about 1% of global electricity—a figure that could rise to 3% to 8% in the next few decades if present trends persist,” adding that there are other environmental impacts that AI brings with it, such as e-waste and the water usage required for data center cooling. She notes that IBM is one of the leaders in creating analog chips for AI apps. These chips deploy phase-change memory (PCM) technology.

PCM technology alters the material phase between crystalline and amorphous states, enabling high-density storage and swift access times—qualities essential for efficient AI data processing. In IBM’s design, PCM is employed to emulate synaptic weights in artificial neural networks, thus facilitating energy-efficient learning and inference processes.
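To make the “synaptic weights in memory” idea a little more concrete, here’s a minimal, purely illustrative sketch of analog in-memory computing: the weights live in (noisy) memory cells, and the matrix-vector products at the heart of inference happen right where the weights are stored. This is a conceptual toy, not IBM’s actual design.

```python
import numpy as np

# Conceptual sketch: synaptic weights stored as analog conductances, so the
# matrix-vector product of an inference step happens "in memory". Analog
# storage is noisy, which is the precision trade-off mentioned later on.
rng = np.random.default_rng(0)

weights = rng.normal(size=(4, 8))            # ideal synaptic weights
noise = rng.normal(scale=0.05, size=weights.shape)
programmed = weights + noise                 # what the memory cells actually hold

x = rng.normal(size=8)                       # input activations
ideal = weights @ x                          # what a digital MAC would compute
analog = programmed @ x                      # what the analog array returns

print("max absolute error:", np.max(np.abs(ideal - analog)))
```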

IBM is not alone. Khoka cites a couple of the little guys: Mythic, which:

…has engineered analog AI processors that amalgamate memory and computation. This integration allows AI tasks to be executed directly within memory, minimizing data movement and enhancing energy efficiency.

She also writes about Rain Neuromorphic, which is developing chips that “process signals continuously and perform neuronal computations, making them ideal for creating scalable and adaptable AI systems that learn and respond in real time.”

Applications well suited to analog chips include edge computing, neuromorphic computing, and AI inference and training.

A principal challenge that switching to analog chips presents is ensuring that they have the same precision and accuracy that digital chips yield. Another hurdle is that, at present, the infrastructure behind AI systems is digital.

It’s no surprise that MIT is keeping its eye on ways to reduce the energy consumption of voracious AI models. MIT’s Lincoln Laboratory Supercomputing Center (LLSC) is finding that by capping power and slightly increasing task time, the energy consumption of GPUs can be substantially reduced. The trade-off: tasks may take 3 percent longer, but energy consumption is lowered by 12-15 percent. With power-capping constraints in place, the Lincoln Lab supercomputers are also running a lot cooler, decreasing the demand placed on cooling systems – and keeping hardware in service longer. (And something as simple as running jobs at night, when it’s cooler, or in the winter, can greatly reduce cooling needs.)
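If you want to experiment with the power-capping idea on your own hardware, NVIDIA’s NVML bindings for Python expose the relevant knob. The sketch below is a generic example, not the configuration Lincoln Lab used; the 200 W cap is a placeholder, and changing the limit normally requires administrator privileges.

```python
# Sketch of GPU power capping via NVML (pip install pynvml).
# The 200 W cap is a placeholder, not the limit LLSC actually used.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"current limit: {current_mw/1000:.0f} W "
      f"(allowed {min_mw/1000:.0f}-{max_mw/1000:.0f} W)")

# Cap the card at 200 W: jobs run a little slower but draw noticeably less power.
pynvml.nvmlDeviceSetPowerManagementLimit(handle, 200_000)

pynvml.nvmlShutdown()
```

The same sort of cap can typically be applied from the command line with nvidia-smi’s power-limit option.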

LLSC is also looking at ways to improve how efficiently AI models are trained and used.

When training models, AI developers often focus on improving accuracy, and they build upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can take testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting down energy waste.

“We’ve developed a model that basically looks at the rate at which a given configuration is learning,” [LLSC senior staff member Vijay] Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.

Jettisoning models that are slow learners has resulted in a whopping “80 percent reduction in energy used for model training.”
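Here’s a toy version of that idea – not LLSC’s actual predictor, just an illustration of the principle: watch each configuration’s early learning curve and stop the ones that clearly aren’t going to finish near the top.

```python
import random

# Toy early stopping for a hyperparameter search (not LLSC's model):
# configurations whose early scores lag the leaders are terminated,
# saving the energy their full training runs would have consumed.
random.seed(1)

def validation_score(config, epoch):
    # Stand-in for training: each config has a hidden "quality" and
    # approaches it as training progresses.
    return config["quality"] * (1 - 0.5 ** epoch)

survivors = [{"id": i, "quality": random.random()} for i in range(100)]

for epoch in range(1, 6):
    ranked = sorted(survivors, key=lambda c: validation_score(c, epoch), reverse=True)
    keep = max(10, len(ranked) // 2)   # keep the top half, down to a final top-10
    print(f"epoch {epoch}: stopping {len(survivors) - keep} configurations early")
    survivors = ranked[:keep]
```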

Whatever your feelings about AI, it’s comforting to know that there are plenty of folks out there trying to ensure that AI’s power consumption will be held in check.

 


Image source: Emerj Insights

What’s brewing with AI?

Not that I’ve given it all that much thought, but if I’d been asked, I don’t think that I’d have put brewing beer very high on the list of candidates for AI involvement.

Not that the beer industry is any stranger to using up-to-date technology. It’s widely used in brewing’s production and logistics processes. But making beer that tastes better? I would have said that this is more art than science. Sure, the major market share industry players with the more widely-known and consumed brands are more focused on the “science” parts of production and logistics – after all, a Corona’s a Corona and a Bud’s a Bud. And the microbrewers (not to mention the homebrewers) would come down more on the “arts” side, using trial and error to come up with the ideal mix.

Of course, even Corona and Budweiser are always introducing new products, and whether you’re one of the big guys or one of the little guys, creating a beer that tastes good isn’t easy. Figuring out whether – to borrow from an ancient Miller ad – a beer tastes great and/or is less filling can involve drafting (and educating) employees and “civilian” beer drinkers to act as taste testers for their products. But, as a recent MIT Technology Review article said, “running such sensory tasting panels is expensive, and perceptions of what tastes good can be highly subjective.”

Enter AI.

Research published in Nature Communications described how AI models are being used to find not only how consumers will rate a beer, but also how to make a beer that’s better tasting.

This wasn’t an overnight process. Over a five-year period, researchers analyzed the chemical properties and flavor compounds in 250 commercial beers.

The researchers then combined these detailed analyses with a trained tasting panel’s assessments of the beers—including hop, yeast, and malt flavors—and 180,000 reviews of the same beers taken from the popular online platform RateBeer, sampling scores for the beers’ taste, appearance, aroma, and overall quality.

This large data set, which links chemical data with sensory features, was used to train 10 machine-learning models to accurately predict a beer’s taste, smell, and mouthfeel and how likely a consumer was to rate it highly.
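For the engineers in the audience, here’s roughly what that kind of pipeline looks like in miniature. The data below is synthetic and the single gradient-boosted model stands in for the ten models the researchers actually trained; the point is just to show the shape of the chemistry-to-rating mapping, and how you might then ask which compounds matter most (as with the lactic acid finding mentioned below).

```python
# Illustrative sketch of the chemistry -> consumer-rating modeling described
# above. The data is synthetic; the study trained ten model types on measured
# flavor compounds plus ~180,000 RateBeer reviews.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_beers, n_compounds = 250, 200                # 250 beers, ~200 measured compounds
X = rng.normal(size=(n_beers, n_compounds))    # compound concentrations
y = X @ rng.normal(size=n_compounds) + rng.normal(scale=0.5, size=n_beers)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 2))

# Which compounds drive the predicted rating?
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most influential compounds (by column index):", top)
```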

The result? When it came to predicting how the RateBeer reviewers had rated a beer, the AI models actually worked better than trained tasting experts. Further, the models enable the researchers “to pinpoint specific compounds that contribute to consumer appreciation of a beer: people were more likely to rate a beer highly if it contained these specific compounds. For example, the models predicted that adding lactic acid, which is present in tart-tasting sour beers, could improve other kinds of beers by making them taste fresher.”

Admittedly, having lactic acid in a beer doesn’t sound all that appealing. But if the beer tastes fresher, well, just don’t read the fine print on the ingredients list.

One area where they anticipate the AI approach will prove particularly effective is in the development of non-alcoholic beers that taste as good as the real thing. This will be great news for those who want to enjoy a beer without having to consume any alcohol.

There are other instances of AI being used in brewing. Way back in 2016, a UK AI software startup, IntelligentX, came out with four beers based on its Automated Brewing Intelligence algorithm. The release of Amber AI, Black AI, Golden AI, and Pale AI caused a brief flurry of excitement as the first AI-developed beers. Unfortunately, it looks like none of them made much of an impact in the beer market. When I searched for them, I couldn’t find any references beyond 2019.

Maybe the models that the Belgian researchers produced will have more luck creating a successful AI beer.

——————————————————————————————

The full research report from Nature Communications can be found here.

Critical Link Introduces Agilex 5 SoC FPGA Solutions

Syracuse, N.Y. – April 30, 2024 – Critical Link, LLC, a leading US-based manufacturer of FPGA, DSP, and CPU-based System on Modules, is pleased to announce new embedded solutions around the Agilex™ 5 SoC FPGA E-Series from Altera®. Critical Link is developing two product families around the Agilex 5 SoC FPGA E-Series: a single board computer and a system on module (SOM) family.

The MitySBC-A5E single-board computer was developed as part of the Agilex 5 SoC FPGA Early Access Program and will be the first to market. The MitySBC-A5E features a 32mm x 32mm Agilex 5 SoC FPGA E-Series with 656K LE FPGA fabric, dual-core Cortex-A55, dual-core Cortex-A76, PCIe 3.0, and 24 transceivers up to 17Gbps. The board includes 8GB LPDDR4 for the HPS, 8GB LPDDR4 for the FPGA, 64GB eMMC, microSD, and QSPI NOR for configuration. A rich set of interfaces, including 8 MIPI x4 lanes, 2.5G Ethernet, FMC, USB-C & USB 2, among others, makes this a powerful solution for embedded product development teams working on next-generation industrial performance applications.

The MitySBC-A5E will be available as a development kit as well as a production-suitable single-board computer for customers interested in achieving first-to-market advantages with the high-performance, low-power Agilex 5 SoC FPGA. The product datasheet and other documentation are available today, and customers will benefit from Critical Link’s engineering and application support for the life of their product.

“Critical Link has been partnering with Altera and Intel for more than 10 years, helping customers reach the market fast with next-generation products based on the latest FPGA technology,” says Tom Catalino, Vice President and Founder of Critical Link, LLC. “We are excited to lead the next wave of FPGA-based designs and bring the Agilex 5 SoC FPGA power and performance advantages to our customers.”

Following the introduction of the single board computer, Critical Link is bringing the MitySOM®-A5E family of system-on-modules to market later this year. The MitySOM-A5E family will offer a wide range of FPGA densities, memory configurations, optional transceivers, and temperature ranges all in a compact 51mm x 71mm (2.0” x 2.8”) form factor to fit most applications. These modules are designed for long-term availability and support, meaning customers can confidently design them into long lifespan products in the test & measurement, medical/scientific, defense, and energy/utilities industries. 

Prototypes for the MitySBC-A5E will be available in Q2 2024, with production in early 2025. System on Module prototypes are expected later this year with production to start in 2025.

 

For more details on the MitySBC-A5E Single Board Computer, visit: https://www.criticallink.com/product/mitysbc-a5e-single-board-computer/.

For more details on the MitySOM-A5E System on Module family, visit: https://www.criticallink.com/product/mitysom-a5e/.

 

ABOUT THE COMPANY:

Critical Link, LLC (Syracuse, NY www.criticallink.com), founded in 1997, develops system on modules (SOMs) for electronic applications. Our MitySOM® and MityDSP® families incorporate the latest FPGA, DSP, and CPU technologies, and are designed for long product lifespan and performance in the field. We supply OEMs in a wide range of industries including manufacturing, medical, scientific, defense, and energy/utilities. We ship worldwide and are franchised with many of the top electronics distributors. 

Critical Link, LLC, is privately held and is ISO 9001:2015 Registered by SRI Quality System Registrar. Critical Link is a Gold-level member of the Intel® Partner Alliance.

 

*Altera Agilex 5 FPGA D-Series (A5D031, mid-speed grade) vs. AMD/Xilinx Versal (VM1102, -3HS speed grade) at 90% utilization, 600 MHz, using the vendors’ power estimator calculators.

Altera, the Altera logo, and other Altera marks are trademarks of Altera or its subsidiaries.

Smart vending machines (really smart). Who knew?

At Critical Link, we stock our kitchen with snacks – free snacks – so we’re a vending-machine-free environment.

Not that I have anything against vending machines.

Who among us hasn’t stood in front of one, looking through row upon row of goodies, debating whether to hit E2 (M&M’s)  or B6 (Doritos), and hoping after you pressed your magic number (B6) that your snack (Doritos) gets released and you don’t have to wrestle with the machine to shake it loose?

But I haven’t given much thought to vending machine technology since they replaced the mechanical pull knobs with state-of-the-art pushbuttons way back when.

As it turns out, vending machines have gotten smart. Really smart. Facial recognition smart.

I learned this after coming across an article about Canada’s University of Waterloo, which is replacing M&M/Mars branded smart vending machines on its campus with dumbed-down machines that aren’t gathering facial recognition data.

The issue emerged when a Waterloo student, innocently hoping to buy some M&M’s, saw an error message pop up that read “Invenda.Vending.FacialRecognitionApp.exe,” indicating that the app had failed to launch.

This prompted the student to post a pic of the error message on Reddit and ask why a candy machine would have facial recognition.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called mathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines—without ever requesting their consent. (Source: ArsTechnica)

When Stanley reached out for comment to Adaria Vending Services, which stocks and services the machines in question, the company responded:

“…what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

Adaria added that the machines comply with the EU’s strict General Data Protection Regulation privacy law.

Invenda, which makes the machines, pointed out that its sensing app, despite its name (Invenda.Vending.FacialRecognitionApp.exe), is used for detection and analysis of “basic demographic attributes” and data like how long someone spends making their choice.

“The vending machine technology functions as a motion sensor, activating the purchasing interface upon detecting individuals, without the capability to capture, retain, or transmit imagery. Data acquisition is limited to assessing foot traffic at the vending machine and transactional conversion rates.”

I would think that much of the useful information you can get out of a vending machine can be gotten the old-fashioned way: counting the volume purchased of different items when the machine is being restocked. But more nuanced information would, of course, be useful to both Invenda and M&M/Mars. Analysis might yield intelligence on where to position the more valued items. Info on gender and age (which I’m assuming are among the basic demographics gathered) of those who look but don’t purchase could help them figure out how to capture the business of those non-purchasers.

Still, I don’t blame the students for not wanting vending machines smart enough to do surveillance on their campus.

The University of Waterloo, by the way, is a big STEM school, known for science, computer science, engineering, math…You gotta love the fact that it was the techies who jumped on this case and sleuthed it out!

How smart is your shopping cart?

With Thanksgiving soon upon us, most Americans will be going grocery shopping for turkey and all the fixings if they’re hosting, or for the ingredients for a side if they’ve got an invite to someone else’s dinner.

Some shoppers will no doubt be shopping the old-fashioned way, piling their groceries into their cart and getting in the queue for a living, breathing cashier. Others will be doing self-checkout, doing their own scanning and bagging. Still others may find themselves pushing an AI-powered smart cart.

According to a Forbes article I saw a while back, for supermarkets, the ability to totally avoid the checkout process may be the “killer app” that will bring shoppers who’ve fled to big-box, mass market retailers like Walmart or warehouse clubs like Costco back into the traditional brick-and-mortar grocery shopping sphere.

“There is so much room for technology to improve friction in grocery stores, most especially the pain of waiting in line for a cashier, not to mention problems that arise in self-checkout when you are buying more than a handful of stuff,” said John Harmon, Coresight’s senior retail/technology analyst. (Source: Forbes)

Grocery stores have been deploying technology to improve the shopping experience, including the ability to order online – with or without in-store pickup. But shoppers, for the most part, prefer to pick out their own produce, meats, and deli items. Some stores, including Wegman’s, a major chain in my area, tried “scan and go” approaches, in which shoppers using a mobile app scanned and bagged as they wandered the aisles. Alas, this approach led to losses that exceeded those stemming from self-checkout systems. (It’s easier to keep an eye on shoppers when they’re in a checkout area than when they’re scanning items throughout the store.) Wegman’s, among other stores, eliminated its scan-and-go option.

The hope is that smart carts will give “customers the convenience of cashier-less, pick-and-go unattended checkout,” and the stores will experience less shrinkage.

What smart carts will do, of course, is require stores to invest in a lot of technology: cameras and shelf sensors everywhere. Some smart cart tech providers offer systems that clip on to existing grocery carts; others offer fully tricked-out grocery carts.

Last month, embedded.com had a piece that drilled down on the camera technology used by smart carts. In “Implementing multi-camera synchronization for retail smart carts,” Maharajan Veerabahu writes about some of the cutting-edge technology that will help bring about the smart cart revolution by “leveraging customer identification, tracking, and product recognition algorithms.”

For starters, he defines multi-camera synchronization, which is essential for getting a read on the items that a shopper is putting in their smart cart.

Multi-camera synchronization begins at the hardware frame-level by interconnecting all cameras through a “Master”-“Slave” configuration. This setup allows for individual camera control and the ability to select the number of cameras to be streamed. Such systems enable multi-image capturing, multi-video recording, and multi-network streaming.

The synchronization between multiple cameras can be achieved through hardware and software trigger modes, each having its own advantages and limitations. (Source: embedded.com)

He then goes on to outline some of the challenges developers face – simultaneous readout and low latency – and then looks at the right approach for implementing multi-camera synchronization. Veerabahu describes a solution that calls for six cameras to take in the entire shopping cart.

…by connecting four USB cameras to cover the corners and two MIPI cameras in the center, creating a complete image. The synchronization between the cameras should be for a single interface – either USB or MIPI. Therefore, it is necessary to design the smart cart device to support both USB and MIPI cameras.

Multi-camera synchronization will require “either a hardware trigger or a software trigger mode.” In his instance, “these modes are implemented in USB camera firmware and MIPI camera drivers.” Veerabahu then expands on how the hardware and software triggers are used. He ends his piece by addressing the latency challenge.

In the previously mentioned setup with four USB cameras on an embedded platform, the USB bandwidth may become limiting. To overcome this limitation, increasing the line time of sensor readout can help utilize the available bandwidth effectively. By doing so, the blanking between frame readouts can also be increased, optimizing bandwidth utilization while avoiding frame corruption.
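To make the trigger idea a bit more tangible, here’s a rough, application-level sketch of a software trigger: several capture threads block on a shared event and all grab a frame the moment it fires. Production smart carts do this down in camera firmware and drivers, as Veerabahu describes; the device indices below are placeholders.

```python
# Rough application-level sketch of software-triggered multi-camera capture.
# A shared event acts as the "trigger" so all worker threads grab their
# frames as close together in time as possible.
import threading
import cv2

CAMERA_IDS = [0, 1, 2, 3]          # placeholder device indices
trigger = threading.Event()
frames = {}

def capture(cam_id):
    cap = cv2.VideoCapture(cam_id)
    trigger.wait()                 # block until the common trigger fires
    ok, frame = cap.read()
    if ok:
        frames[cam_id] = frame
    cap.release()

threads = [threading.Thread(target=capture, args=(cid,)) for cid in CAMERA_IDS]
for t in threads:
    t.start()
trigger.set()                      # the software trigger: release all cameras at once
for t in threads:
    t.join()

print("captured frames from cameras:", sorted(frames))
```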

There are a couple of Wegmans in upstate NY that are experimenting with smart carts, but neither is close by. So whether we end up going through the human-powered checkout line, or do self-checkout, the makings of our Thanksgiving feast will be placed in an old-fashioned, not very smart, regular old grocery cart.

Happy Thanksgiving!

Taking a look at Accenture’s Technology Vision 2023 – Part 2 of 2

In my last post, I took a look at the first two trends (digital identity, big/bigger data) discussed in Accenture’s Technology Vision 2023 report. This time around, I’ll see what else they had to say about the technology trends that they see being the most impactful to the convergence of the physical and digital worlds and, hence, to the enterprises Accenture works with.

Not surprisingly, AI is on their list, in this case as the trend toward Generalizing AI.

AI has been around for a good long while, but it was the 2020 release of OpenAI’s GPT-3, the largest language model to date, that really began to turn heads. What GPT-3 did was show off breakthrough capabilities, “teaching itself to perform tasks it had never been trained on, and outperforming models that were trained on those tasks.” All of a sudden, a model didn’t have to be created to perform a specific task within its data modality (e.g., text, images). We’re now heading into multimodal model territory – models “which are trained on multiple types of data (like text, image, video, or sound) and to identify the relationships between them,” and which have hundreds of billions, even trillions, of parameters. Game changer! We’re still not replacing humans quite yet, but Accenture cites one “generalist agent” that can perform and seamlessly switch between more than 600 tasks, including chatting, captioning images, playing video games, and operating a robotic arm.

Generalizing AI is made possible thanks to a couple of important innovations. Transformer models:

…are neural networks that identify and track relationships in sequential data (like the words in a sentence), to learn how they depend on and influence each other. They are typically trained via self-supervised learning, which for a large language model could mean poring through billions of blocks of text, hiding words from itself, guessing what they are based on surrounding context, and repeating until it can predict those words with high accuracy. This technique works well for other types of sequential data too: some multimodal text-to-image generators work by predicting clusters of pixels based on their surroundings.
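The masking recipe described in that passage is simple enough to show in miniature. The snippet below only builds (input, target) training pairs by hiding one word at a time; the transformer that would learn to fill in the blanks is omitted.

```python
import random

# Miniature version of the self-supervised recipe: hide a word, and the
# training target is to recover it from the surrounding context.
random.seed(0)
sentence = "transformer models learn how words depend on and influence each other".split()

def masked_example(tokens, mask_token="[MASK]"):
    i = random.randrange(len(tokens))
    masked = tokens.copy()
    masked[i], target = mask_token, tokens[i]
    return " ".join(masked), target

for _ in range(3):
    inp, answer = masked_example(sentence)
    print(f"input : {inp}\ntarget: {answer}\n")
```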

Scale is the second innovation here. Increased computing power enables transformer models to vastly increase the number of parameters that can be incorporated in the model. (Trillions, anyone?) This yields both greater accuracy and enables the models to learn new tasks.

The Accenture report culminates in an exploration of what they term “the big bang of computing and science,” a feedback loop between technology and science where breakthroughs in one domain spur breakthroughs in the other – all occurring at hyper speed.

In this section, Accenture describes how science and technology are pushing the envelope in several different industries. In materials and energy, supercomputers operating at exascale will enable chemists to perform molecular simulations with greater accuracy, coming up with new materials to tackle problems such as climate change. And as chemists push up against the inevitable limits of even the most powerful supercomputers, quantum computing will step in.

New rocket and satellite technologies are enabling scientists to conduct more experiments in space, where the ability to work in the unique conditions of space is “accelerating what we can learn about fluid physics, diseases, materials, climate change, and more, to improve life on Earth.” A decrease in the costs of components and an increase in the involvement of the private sector mean that the once-prohibitive costs of experimentation in space are coming down. There’s even a startup offering “digital lab space.”

In biology, the computing-science “big bang” has brought about “an entirely new field: synthetic biology…[which] combines engineering principles with biology to create new organisms or enhancing existing ones.” This has implications for any number of life’s necessities: food, drugs, fuels. The costs of DNA sequencing and synthesis are having a Moore’s Law moment, cutting in half every two years. (I didn’t check the arithmetic – I’ll trust Accenture here! – but in 2001, sequencing the human genome cost $100 million. Today, it’s about $600.)

The Accenture report is totally free. (You don’t even have to sign up to access it.) Always interesting to see what intelligent observers have to say about what’s happening in the world of science and technology.

AI isn’t quite human-like. Yet. But it’s getting closer

There’s been plenty of buzz in the last month or so about AI chatbots. And there’s no doubt about it, AI is making chatbots more human-like. Still, it looks like we have a bit of time before the human race is completely replaced by machines.

Out of the gate in early February was Google with Bard, its AI chatbot. Unfortunately, in its first demo, Bard gave an answer that was quickly shown to be wrong.

To err is human, I suppose. So in making a factual mistake, Bard might have been passing the Turing Test of sorts. (The Turing Test evaluates whether a machine can give responses that are indistinguishable from those a human would provide.)

The question Bard flubbed was a claim that the James Webb Space Telescope (JWST) “took the very first pictures of a planet outside of our own solar system.” In fact, those first pictures had been taken nearly a decade before the JWST launched.

…a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently “hallucinate” — that is, make up information — because they are essentially autocomplete systems.

Rather than querying a database of proven facts to answer questions, they are trained on huge corpora of text and analyze patterns to determine which word follows the next in any given sentence. In other words, they are probabilistic, not deterministic. (Source: The Verge)
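A toy example makes the “probabilistic autocomplete” point concrete: count which word follows which in a small corpus, then sample the next word from those counts. Nothing here checks facts; it only produces statistically plausible continuations. (Real models use neural networks trained on enormous corpora, not lookup tables.)

```python
import random
from collections import Counter, defaultdict

# Toy "which word follows the next" model: probabilistic, not deterministic,
# and with no notion of whether its output is true.
random.seed(0)
corpus = ("the telescope took the first pictures and "
          "the telescope took new pictures of the planet").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word, generated = "the", ["the"]
for _ in range(6):
    candidates = follows[word]
    if not candidates:             # dead end: no observed continuation
        break
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    generated.append(word)

print(" ".join(generated))         # plausible-sounding, never fact-checked
```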

And if you think this little factual error doesn’t matter, Google’s stock price dropped 8% the following day.

Perhaps hoping to upstage their friends at Google, the next day Microsoft began introducing a new version of Bing, their search engine. Bing is a very small player in search. The most recent numbers I saw gave Bing a market share of about 3% vs. Google’s 93%. I’m sure they’re hoping that a Bing that’s really smart will close that gap. The new Bing incorporates a customized version of chat that’s running on OpenAI’s large language model ChatGPT. The new Bing promises to provide complex responses to questions – replete with footnotes – as well as to assist creative types with their poetry, stories, and songs. It’s been made available to a limited number of previewers, and there’s a long waitlist of those hoping to get a go at it.

Unfortunately, the new Bing went a bit rogue.

…people who tried it out this past week found that the tool, built on the popular ChatGPT system, could quickly veer into some strange territory. It showed signs of defensiveness over its name with a Washington Post reporter and told a New York Times columnist that it wanted to break up his marriage. It also claimed an Associated Press reporter was “being compared to Hitler because you are one of the most evil and worst people in history.”

Microsoft officials earlier this week blamed the behavior on “very long chat sessions” that tended to “confuse” the AI system. By trying to reflect the tone of its questioners, the chatbot sometimes responded in “a style we didn’t intend,” they noted. Those glitches prompted the company to announce late Friday that it started limiting Bing chats to five questions and replies per session with a total of 50 in a day. At the end of each session, the person must click a “broom” icon to refocus the AI system and get a “fresh start.” (Source: Washington Post)

Again, I guess you could say that getting confused and lashing out are actually very human traits. Still, if the expectation is that AI chatbots will be factual, relevant, and polite, it appears that they aren’t yet ready for primetime.

Not to be outdone, in late February, Meta released LLaMA, an AI language generator.

LLaMA isn’t like ChatGPT or Bing; it’s not a system that anyone can talk to. Rather, it’s a research tool that Meta says it’s sharing in the hope of “democratizing access in this important, fast-changing field.” In other words: to help experts tease out the problems of AI language models, from bias and toxicity to their tendency to simply make up information. (Source: The Verge)

Of course, Meta had its own AI chatbot fiasco in November with Galactica. Unlike Bing and Bard, which are general purpose, Galactica’s large language model was supposedly expertly built for science.

A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. People found that it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space as readily as ones about protein complexes and the speed of light. It’s easy to spot fiction when it involves space bears, but harder with a subject users may not know much about.  (Source: Technology Review)

It’s one thing to insult a newspaper reporter; quite another to make up scientific papers.

Looks like us humans are safe for a while. For now.

Chip Design via AI

Google is always up to something interesting, and one of the interesting things they’ve been up to is using Artificial Intelligence (AI) to automate the chip design process. The first place they’re deploying their new model is to design their next-gen tensor processing units (TPUs). These are the processors, used in Google’s data centers, that are tasked with increasing the performance of AI apps. So, AI deployed to help accelerate AI.

The [Google] researchers used a dataset of 10,000 chip layouts to feed a machine-learning model, which was then trained with reinforcement learning. It emerged that in only six hours, the model could generate a design that optimizes the placement of different components on the chip, to create a final layout that satisfies operational requirements such as processing speed and power efficiency. (Source: ZDNet)

Six hours, eh? That’s fast!

The specific task that Google’s algorithms tackled is known as “floorplanning.” This usually requires human designers who work with the aid of computer tools to find the optimal layout on a silicon die for a chip’s sub-systems. These components include things like CPUs, GPUs, and memory cores, which are connected together using tens of kilometers of minuscule wiring. Deciding where to place each component on a die affects the eventual speed and efficiency of the chip. And, given both the scale of chip manufacture and computational cycles, nanometer-changes in placement can end up having huge effects. (Source: The Verge)

This is the sort of work that could take engineers months to accomplish when done manually.

Optimizing chip layouts is a complex, intricate process. Processors contain millions of logic gates (standard cells), and macro (memory) blocks in the thousands. The “floorplanning” process that decides where to put the standard cells and macro blocks is critical, impacting how rapidly signals can be transmitted. Figuring out where to put the macro blocks comes first, and there are trillions upon trillions of possibilities. Google researchers state that “there are a potential ten to the power of 2,500 different configurations to put to the test.” And given that Moore’s Law still seems to be with us – you remember Moore’s Law: the number of transistors on a chip doubles roughly every two years – there are ever more combinations to worry about.

Obviously, no one’s putting trillions of configurations to the test. Engineers rely on experience and expertise to create their floorplans. But AI can evaluate many different options, and no doubt come up with ones that even the best engineers might have missed.
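As a toy illustration of what “evaluating many different options” means here, the sketch below places a handful of macro blocks on a grid and searches for a layout that minimizes total wire length. It uses generic simulated annealing, not Google’s reinforcement-learning approach, and the block and net names are invented.

```python
import math
import random

# Toy floorplanning search (generic simulated annealing, NOT Google's RL method):
# place a few macro blocks on a grid to minimize total wire length between
# connected blocks. Real chips have thousands of macros and millions of
# standard cells, which is where the astronomical configuration counts come from.
random.seed(0)
GRID = 10
BLOCKS = ["cpu", "gpu", "mem0", "mem1", "io"]
NETS = [("cpu", "mem0"), ("cpu", "mem1"), ("gpu", "mem1"), ("cpu", "io")]

def wirelength(place):
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)

place = {b: (random.randrange(GRID), random.randrange(GRID)) for b in BLOCKS}
current = best = wirelength(place)

for step in range(5000):
    temp = max(0.01, 1.0 - step / 5000)
    b = random.choice(BLOCKS)
    old = place[b]
    place[b] = (random.randrange(GRID), random.randrange(GRID))
    cost = wirelength(place)
    if cost < current or random.random() < math.exp((current - cost) / temp):
        current = cost                     # accept the move
        best = min(best, current)
    else:
        place[b] = old                     # reject and restore

print("best total wirelength found:", best)
```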

Once the macro blocks are in place, the standard cells and wiring are added. Then there are the inevitable revising and adjusting iterations. Using AI in this process is going to free up engineers to focus on more custom work, rather than having to spend their time working out component placement.

The acceleration of chip design is not going to immediately solve the chip shortage crisis, which is at the fab rather than the design level. Still, over time, if next gen chips can be designed faster, it should have positive impacts throughout the supply chain.

One of the most fascinating revelations was that the floorplan created by AI (that’s “b”, to the right) looks more random and scattershot than the very neat and orderly layout (“a” on the left) created by a human engineer. (This illustration is from Nature, which published a paper on the Google AI work.)

Inevitably, when we see AI being deployed, we ask ourselves whether AI, robots, machine learning will replace us humans.

Personally, I’m pretty sure that human engineers are still good for a while. There will always be work for engineers in a world that increasingly relies on technology in just about every aspect of our lives. There’s no denying that AI is going to take on some of the tasks that traditionally have been in human hands, but who knows what new opportunities this will create for knowledgeable and highly-skilled engineers like the ones we have here at Critical Link. Oh, brave new world!