
After a decade, there’s a new Arm architecture

Early this year, nearly a decade after the release of Armv8, Arm introduced its new Armv9 architecture. The new architecture is a response to the ever-increasing demand for the specialized processing required to keep up with the explosive growth of the Internet of Things, AI applications, and 5G.

Security is the principal focus of Armv9:

…the Armv9 roadmap introduces the Arm Confidential Compute Architecture (CCA). Confidential computing shields portions of code and data from access or modification while in-use, even from privileged software, by performing computation in a hardware-based secure environment. (Source: Arm)

This means that “commercially sensitive data and code” – whether in use, at rest, or in transit – will be protected from everything else in the system.

The new Armv9 architecture also addresses the need to support AI workloads, as AI elements are increasingly part of devices and apps. To handle AI requirements, Arm has joined forces with Fujitsu to develop Scalable Vector Extension technology, which supports machine learning and DSP (our old friend!) capabilities for apps like image processing and smart home devices.

With Armv9, Arm also expects to boost CPU performance, which will be accomplished by deploying Arm’s Total Compute design methodology.

Additionally, Arm is developing several technologies to increase frequency, bandwidth, and cache size, and reduce memory latency to maximize the performance of Armv9-based CPUs.

ZDNet took the occasion of the big Arm announcement to produce an informative article titled “Arm Processors: Everything You Need to Know.” It may not be quite everything you need (or want) to know about Arm, but the piece is detailed (and pretty lengthy: settle in for a good long read).

Some of that read is devoted to Arm’s business model, which is not to build processor components but, rather, to license component design – Arm’s intellectual property – for others to work with. (Critical Link incorporates Arm-based components from makers like Intel and Texas Instruments in some of our SOMs.) There’s also a good explanation of what it means when a device is built around x86 (as in, e.g., “Intel Inside”) vs. when a device is designed around Arm.  And if you’re interested in the relationship between Arm and Apple, this may be the article for you. (There’s also a section on the Arm-Nvidia deal, which may or may not happen. Regulators world-wide are looking at that one. Stay tuned!)

For me, things got interesting when the more technical discussion began.

There’s a good section on the merits of RISC (Reduced Instruction Set Computer) computing – which is where the “R” in “Arm” comes from – vs. CISC (Complex Instruction Set Computer) when it comes to efficiency and customizability. This section is followed by more discussion of the difference between an Arm processor and an x86 CPU, and an explanation of the difference between an Arm chip and a GPU.

The article then presents a summary of the design classes of Arm processors that are in production today. Critical Link has several offerings that incorporate the Cortex-A design, which is considered the “workhorse of the Arm family.”

As originally conceived, the client looking to build a system around Cortex-A had a particular application in mind for it, such as a digital audio amplifier, digital video processor, the microcontroller for a fire suppression system, or a sophisticated heart rate monitor. As things turned out, Cortex-A ended up being the heart of two emerging classes of device: single-board computers capable of being programmed for a variety of applications, such as cash register processing; and most importantly of all, smartphones. Importantly, Cortex-A processors include memory management units (MMU) on-chip…The principal tool in Cortex-A’s arsenal is its advanced single-instruction, multiple-data (SIMD) instruction set, code-named NEON, which executes instructions like accessing memory and processing data in parallel over a larger set of vectors. Imagine pulling into a filling station and loading up with enough fuel for 8 or 16 tanks, and you’ll get the basic idea.

I guess that’s why we like it!
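If you’re curious what that vector parallelism looks like at the code level, here’s a minimal sketch using NEON intrinsics from arm_neon.h. It’s purely illustrative – the function and buffer names are my own invention – but each intrinsic really does operate on four 32-bit floats per instruction:

    #include <arm_neon.h>

    /* Illustrative sketch: apply a gain to a block of samples, four floats
     * at a time. Each NEON intrinsic below works on a 128-bit register
     * holding four 32-bit floats in parallel. */
    void scale_and_accumulate(const float *src, float *dst, float gain, int n)
    {
        float32x4_t vgain = vdupq_n_f32(gain);      /* broadcast gain to 4 lanes */
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t s = vld1q_f32(src + i);     /* load 4 samples */
            float32x4_t d = vld1q_f32(dst + i);
            d = vaddq_f32(d, vmulq_f32(s, vgain));  /* 4 multiply-adds at once */
            vst1q_f32(dst + i, d);                  /* store 4 results */
        }
        for (; i < n; i++)                          /* scalar tail for leftovers */
            dst[i] += src[i] * gain;
    }

A modern compiler can often auto-vectorize a simple loop like this on its own, but the intrinsics make the “several tanks of fuel at once” idea explicit.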

There’s also the Cortex-R class, used primarily for “microcontroller applications that require real-time processing,” and the Cortex-M, “a more miniaturized form factor.” Ethos-N processors are deployed for neural network processing; the Ethos-U works as a co-processor, often in conjunction with Cortex-A. Neoverse is for servers and data centers. SecurCore is used in smartcard and embedded security apps.

Our latest SOMs (the MitySOM-AM57F and MitySOM-AM57X) have dual Cortex-M4s on board as part of the TI AM57x processor architecture.

The article continues with a few more technical sections, then concludes with a brief history of Arm processors.

Throughout the article, there are links to a number of different articles/videos you may want to graze through if you have the time and want to learn more. And speaking of learning more, there’s a good summary of the new Armv9 architecture on embedded.com. Happy reading!

When the chips are down…

Even if you haven’t been directly impacted – at least not yet – if you’re reading this post, you’re most likely aware of the current chip shortage. And, if nothing else, you’re most likely to have heard about the chip shortage with respect to the automotive industry. Many of the big automakers have announced shutdowns due to the lack of chips. Amazing to think that the production of end goods that cost tens of thousands of dollars has screeched to a halt because manufacturers can’t get a hold of a component that costs a few bucks! And of course, it’s not just the automotive industry that’s impacted. With the explosion of the Internet of Things, chips are pretty much embedded everywhere. This leads, of course, to greater demand for cloud computing and data center capacity. And 5G technology is rolling out, with demand for new smartphones that can take advantage of higher speeds.

So how did the shortage come about?

When the pandemic first hit, there was a cascade of unanticipated events. It all started with market sectors such as automotive anticipating a drop in demand for their products, prompting them to reduce their demand for chips accordingly. Immediately, their freed-up capacity was claimed by markets that anticipated spikes in demand, such as PCs and electronics.

However, both the drop in demand and following recovery happened much quicker than anticipated. Soon enough, due to reporting delays in supply and demand across the entire value chain and capacity being already reallocated, many businesses ended up facing a chip shortage that halted their entire manufacturing process. (Source: EE Times)

There were several other events that contributed to the problem.

Texas is home to a number of large semiconductor foundries. When that state experienced disastrous power outages, production time was lost at several of them. Then there was a fire at a big Japanese fab.

Then there are the special circumstances that impact the automotive industry:

Over the past decade, foundries have focused their investments in leading nodes such as 5nm and 7nm due to their profitability and high demand from high tech companies. This resulted in a lack of investment in the older nodes that sectors such as automotive typically rely on, further aggravating the shortage by making it even more difficult to find capacity for older nodes somewhere else.

Moreover, the automotive industry has some of the most stringent requirements in their qualification process. Once chips are qualified, it’s not easy to move them to another fab due to the many processes involved and the longer time required to get to a steady state of production.

Intel, TSMC and Samsung are all investing in new plants, but there’s a lead time of a year or two before they’ll be operational.

Not surprisingly, Critical Link is feeling the impact of the chip shortage. And while I can’t deny that it’s been painful, we are faring better than others in our industry. That’s because we’ve been proactively managing build plans ahead of demand for years, and we have multi-pronged supply chains identified to keep material flowing. Even with those strategies in place, we still have team members dedicating some of their valuable time every day searching for the chips we need. Fingers crossed that everything that we’ve ordered gets delivered on time, and that we can continue to discover additional inventory as time progresses.

Like our colleagues everywhere, we are, of course, looking forward to supply catching up with – and even going well beyond – demand. Bring on the glut! In the meantime, it’s been a hassle. But so far, so good.

A brand new Mayflower is “setting sail”, and it’s a technological marvel

Thanks to the pandemic, we missed out last year on the 400th anniversary of the voyage of the Mayflower from Plymouth, England to Plymouth, Massachusetts. The Mayflower left port in September 1620 and landed in what was to become Massachusetts in December of that year. But, better late than never, there’ll soon be a more modern Mayflower setting sail – well, sail may not be the right wording exactly – and this one will have zero humans on board. (Projected launch date: early June.)

The original Mayflower packed in 102 Pilgrims, plus 30 crew members, and the technology they had was pretty rudimentary. Astrolabe, quadrant, cross staff, compass. Those, and a primitive map, were, at best, how the Mayflower made its way across the Atlantic. (I suspect that the Pilgrims were also praying their way to the New World while contending with terrible conditions: poor food, overcrowding, lack of water – they drank beer instead – seasickness, bad weather, cold, damp, foul air, rats, and, of course, the uncertainty about what they’d find when they reached the other side.)

In contrast to the thin technology available in 1620, the Mayflower Autonomous Ship (MAS) is just packed to the gills with technology.

What’s on board this baby?

  • 6 AI-powered cameras
  • 30 onboard sensors
  • 15 edge devices

And, as noted, no captain, no crew:

With no human captain or onboard crew, MAS uses the power of AI and automation to traverse the ocean in its quest for data and discovery.

The ship’s AI Captain performs a similar role to a human captain. Assimilating data from a number of sources, it constantly assesses its route, status and mission, and makes decisions about what to do next. Cameras and computer vision systems scan the horizon for hazards, and streams of meteorological data reveal potentially dangerous storms. Machine learning and automation software ensure that decisions are safe and in-line with collision regulations.

Small, lightweight edge devices provide just enough local compute power for the ship to operate independently, even without connectivity or remote control. When a connection becomes available, the systems sync with the cloud, enabling updates and data upload. (Source: MAS400.com)

While this excursion is tied to a belated celebration of the four-hundredth anniversary of the original Mayflower, its purpose is scientific exploration. It will be gathering all sorts of data that will help researchers better understand what’s going on with the oceans. And, given the critical role that the oceans play when it comes to the earth’s climate, we can use all the data and understanding we can get. The information the Mayflower will gather, and the analyses this will enable, will also help improve navigation safety. And, of course, it will help bring us closer to the day when all modes of autonomous transportation – from cars, to trucks, to planes, to ships at sea – are deployed. (No, I’m not ignoring trains here. There are already autonomous trains in operation in a number of places, largely for rapid public transportation and freight hauling.)

The website for the Mayflower AS is well worth a look. Among other things, it lays out a scenario and details the sensory inputs, real-time analytics, and outcomes for managing the scenario, based on the data and analysis.

I’ll be following the voyage of the Mayflower AS with interest, and wish them, as the sailors say, “fair winds and following seas.”

From EE Times Europe: Five Critical Trends in Embedded Technology

I know that I’ve been focusing on trends of late, but then I (belatedly: it’s from last November) came across a trend piece from EE Times Europe that looked specifically at 5 critical trends in embedded technology. Embedded technology is what Critical Link is all about, so here we go. (I promise that this will be the last trend piece. Until I see another one of interest. Promise.)

Wireless connections: the growth is nothing short of explosive
This section begins by noting that the market for all things IoT is forecasted to hit $1 Trillion by 2025. That’s Trillion with a T. (I’m old enough to remember when a Billion with a B was a big number.) Wherever you go, wherever you look, whether you’re in your home, your car, your workplace, on the factory floor, in a store or restaurant (now that COVID restrictions are being relaxed) or just out and about, the Internet of Things is almost but not quite ubiquitous.

Enter the 5G era, which:

…will push transmission speeds to 20 Gbps; 6G promises to be even faster. And with the availability of low-power wide-area networks (LPWANs), along with chips and devices from suppliers such as NimbeLink, Sequans, MultiTechElektronik, Digi, and Telit, the explosive growth of wireless connections will continue to accelerate.

If the IoT isn’t yet everywhere, it will be.

Speed matters. A lot.
With the availability of 5G networks, there’ll be parallel demand for increased processor speed. Soon enough, there’ll be new kids on the block. We all know about 8- and 16-core processors. And now there are 128-core processors available from Intel and AMD. Make way for the “5-nm-node–based 192-core processor [Arm] has on its roadmap.” Now that’s what I call fast.

What’s also fast is quantum computing, which focuses on areas that require massive computing power, like scientific research and machine learning. Quantum computing is still a ways off. There are a few kinks to be worked out in terms of measuring and controlling quantum bits (qubits). But it’s coming. And computing speed will increase radically when it arrives.

Cyberattacks aren’t going anywhere.
In fact, they’re getting worse. There are more of them, and as we rely more and more on the IoT to run our daily lives, they’re becoming more and more disruptive. And dangerous.

In September, the death of a patient in Germany was directly linked to a cyberattack. She needed urgent medical care, but a ransomware attack on the Düsseldorf hospital to which she had been rushed for care prevented the facility from providing the needed services to save her life. The patient was rerouted to a hospital 20 miles away, but it was too late. This was the first report of a tragedy of this kind, but it may not be the last.

In response, cybersecurity efforts are increasing:

Embedded security software and hardware are growing exponentially. New hardware designs increasingly have built-in security silicon from Infineon, Microchip, STMicroelectronics, Micron, Winbond, and others.

Let’s hope that this is a race that cybersecurity will win, but we know that no one in this sphere will be resting on their laurels.

What do you know? VR is becoming a reality.
Virtual reality – and its cousins augmented reality (AR), mixed or merged reality (MR), and extended reality (XR) – is no longer just about gaming; these technologies are being used in a variety of industrial and commercial apps.

VR applications that can increase productivity span a broad range, including medicine, manufacturing, training, and entertainment. For example, a doctor-in-training requires many hands-on experiences, which may be costly to set up if not difficult to find. With VR, a simulation can be set up for the medical intern to practice without the risk of making a serious error on a real patient.

As the technology is perfected, we may find that more and more of our face-to-face encounters are with holograms.

More and more, artificial intelligence is making its way into embedded design.
AI is becoming as ubiquitous as the Internet of Things. Whether we know it or not, it’s in use when we make purchases online, seek customer support, check out who’s on our doorstep, let the car park itself, or (increasingly) ask our doctor for a diagnosis. That’s just the personal stuff. AI is also used extensively in industrial control systems and on the shop floor. And there’s more of it coming:

Today, the smart factory equipped with IoT and AI can increase productivity by monitoring the operation in real time and having AI make decisions that avoid operational errors. In the long run, AI could do much more. In one vision of the future factory, the facility would use AI and robotics to retool itself on the fly for production of a different product. An assembly line set up to build medical devices one day, for example, might build wearable smartwatches the next day.

Meanwhile, the price of AI hardware is coming down, so more embedded designs will likely be adding AI capabilities.

What’s next?
Embedded designs will be incorporating all of these technologies. Expect more, better, faster.

 

MIT Tech Review’s Breakthrough Technologies for 2021 – Part 2

In my most recent post, I summarized the first five technologies listed in the MIT Technology Review’s annual roundup of breakthrough technologies. Here’s my summary of the final five.

The Review’s section on hyper-accurate positioning begins with an anecdote about a recent technology success in China. Last summer, the country’s global navigation satellite system was able to warn a village in the path of a landslide that its residents were in peril. The villagers were evacuated and there was no loss of life. This disaster-monitoring/early-warning system is just one of many applications where satellite positioning plays a role.

Precision agriculture, drone delivery, logistics, ride-hailing, and air travel all depend on highly accurate position detection from space. Now a series of deployments and upgrades are boosting the accuracy of the world’s most powerful global satellite positioning systems from several meters to a few centimeters.

When full-accuracy GPS first became available to civilian users roughly twenty years ago, positioning accuracy was between five and ten meters. It’s now approaching the one-to-three-meter range. And they’re even talking about centimeters! New, non-satellite technologies are in the works, including an “approach [that] uses the quantum properties of matter to locate and navigate without outside references.” Lots of exciting things happening in this space, that’s for sure.

Whether we like it or not, the pandemic has accelerated the trend toward remote everything. Online conferencing/meeting tools, and work-from-home for remote access have been around for a good long while. What’s changed is that a lot more people are using them, and this is accelerating our acceptance of online in all aspects of our life. Most of us will go back to celebrating holidays with families in person, rather than on Zoom. But will we be going on as many face-to-face sales calls as we used to?

Two areas that Tech Review points to are learning and healthcare. Most of the parents I know (and most of the kids) were more than happy to get back into the physical classroom. But teachers have become more attuned to and skilled at incorporating technology into their teaching methods. And online tutoring that addresses the specific learning styles and needs of learners will likely become more widely used.

Never say never, but telemedicine isn’t going to fully replace visits to the doctor’s office any time soon. Still, we’re seeing that many routine visits and check-ins can be accomplished through online “house calls” and through monitoring technology.

And one thing to note: perfecting remote learning and medicine will have tremendous benefit for those living in remote and underserved areas.

AI and robots are getting smarter about plenty of things, but for all the advances in neural networks, they’re still not all that adept when presented with something that’s new and unfamiliar. That’s because they still can’t operate like humans do, learning by combining sensory input (sights, sounds) with words to create meaning. Multi-skilled AI, “with access to both the sensory and linguistic ‘modes’ of human intelligence, should give rise to a more robust kind of AI that can adapt more easily to new situations or problems.” The first application area: computer vision.

I don’t imagine it will have much of an impact on my life, but I’ll take the Tech Review’s word for it that they’ve awarded TikTok recommendation algorithms a place on their breakthrough technologies list. TikTok’s “For You” matters because “it’s flipped the script on who can get famous online.”

While other platforms are geared more toward highlighting content with mass appeal, TikTok’s algorithms seem just as likely to pluck a new creator out of obscurity as they are to feature a known star. And they’re particularly adept at feeding relevant content to niche communities of users who share a particular interest or identity.

Who knows? Maybe you, too, can get TikTok famous online!

Green hydrogen is the final technology that makes it onto the breakthrough list. Green hydrogen will play a critical role in achieving clean energy goals.

Hydrogen has always been an intriguing possible replacement for fossil fuels. It burns cleanly, emitting no carbon dioxide; it’s energy dense, so it’s a good way to store power from on-and-off renewable sources; and you can make liquid synthetic fuels that are drop-in replacements for gasoline or diesel. But most hydrogen up to now has been made from natural gas; the process is dirty and energy intensive.

Enter solar and wind power, which are dropping in price as the technologies are perfected and their use becomes more widespread. “Simply zap water with electricity, and presto, you’ve got hydrogen.” Green hydrogen made this way is still more expensive than hydrogen produced from natural gas, but that’s changing and should continue to change.

The Tech Review led its list off with a shoutout to messenger RNA. Given the pandemic, and the role that mRNA vaccines are playing in getting us through it, this certainly should have come in first place. But I’m not sure if there’s any rank ordering when it comes to the rest of the breakthrough technologies cited. I can’t be the only person out there who thinks that green hydrogen, which will help us move away from fossil fuels, is more important than TikTok algorithms that help nobodies become famous. (But maybe that’s just me!)

MIT Tech Review’s Breakthrough Technologies for 2021 – Part 1

Each year, the MIT Technology Review publishes a list of the most important technologies on the scene. This year, they lead off with what, by anyone’s reckoning, has to be THE technology of the year, if not the decade: messenger RNA. This technology has been under development for 20 years, but had not been used in any drug on the market – until last year, when both Moderna and Pfizer used mRNA to jumpstart development of coronavirus vaccines. Unlike vaccines that use a weakened version of a germ to set off an immune response, mRNA-based vaccines “teach” cells how to create a protein that will trigger immunity.

Beyond potentially ending the pandemic, the vaccine breakthrough is showing how messenger RNA may offer a new approach to building drugs.

In the near future, researchers believe, shots that deliver temporary instructions into cells could lead to vaccines against herpes and malaria, better flu vaccines, and, if the covid-19 germ keeps mutating, updated coronavirus vaccinations, too.

But researchers also see a future well beyond vaccines. They think the technology will permit cheap gene fixes for cancer, sickle-cell disease, and maybe even HIV.

After a year now of life on semi-hold for so many of us, a year that’s seen so much illness and death, it would be enough if “all” that mRNA did was help end the pandemic. That it holds so much promise to take on so many other illnesses is definitely a bonus.

This blog post was NOT written using GPT-3, the natural language computer model that’s been producing the written word that, to date, is the closest to what an actual human could do. “Trained on the text of thousands of books and most of the internet, GPT-3 can mimic human-written text with uncanny – and at times bizarre – realism, making it the most impressive language model yet produced using machine learning.”

That’s the good news, I guess. The not so good news? The realistic-sounding text produced may be pure nonsense, or it may be out and out misinformation. And if there’s one thing we don’t need out there, it’s more misinformation. Another downside is the tremendous amount of computational power it consumes. So far, the human writer has a far smaller carbon footprint. Still, we can certainly expect that this technology will continue to be perfected, and that chatbots will be getting more authentically chatty.

Most of us have an awful lot of personal data out there. We do a lot of shopping on Amazon. We bank online. We’re on social media. We take those online quizzes to see what Harry Potter character we’re most like. (Some of us do, anyway: not me!) We do Yelp reviews. We spit and swab so that we can find out what village in Italy our great-grandparents hailed from. And whether “our information has been leaked, hacked, [or] sold and resold”, who knows what organizations are using that data, and for what purposes.

Data trusts offer one alternative approach that some governments are starting to explore. A data trust is a legal entity that collects and manages people’s personal data on their behalf. Though the structure and function of these trusts are still being defined, and many questions remain, data trusts are notable for offering a potential solution to long-standing problems in privacy and security.

Improved privacy and security? Bring it on!

Electric or other alternative fuel vehicles are certainly the automotive future, but for now, for all the hype, they’re not for everyone. The lithium-ion batteries that run e-vehicles are just too problematic. They’re costly, have relatively limited range, and plug-in “refueling” is a) not as available as gas stations; and b) takes too long when compared to the quick convenience of pulling up to a pump and filling ‘er up. Lithium-metal batteries may be the answer. A company called QuantumScape says that its battery “could boost the range of an EV by 80% and can be rapidly recharged.” They’re still a ways off from being ready for prime time, but QuantumScape has inked a deal with Volkswagen, and hopes to be selling EVs using this new technology by 2025. So not that far off.

Digital contact tracing was a breakthrough technology that, frankly, hasn’t panned out all that well in terms of helping halt covid’s spread, even though a number of companies were out of the gate with applications early in the pandemic. The reasons had less to do with the technology than with messaging, adoption, privacy concerns, and other issues that had nothing to do with whether Bluetooth could ping nearby phones. We’ll have to keep this in mind if and when the next pandemic rears its ugly head. Technology on its own isn’t going to be the answer.

That’s it for the first five breakthrough technologies. Next time, we’ll take a look at the other five technologies that the Tech Review has on their list.

Roadkill


A few weeks back, I came across a brief article that piqued my interest. I can’t remember all the details, but it was about some researcher who’s exploring the idea of embedding sensors in humans to help prevent autonomous vehicles from running them over.

Sensors in humans aren’t all that new. They’ve been used in devices like pacemakers for years. And while it’s not a widespread practice, humans can get microchipped in the same way we chip our pets. Still, embedding sensors in a human so that they don’t become roadkill to a self-driving car chock-full of sensors, well, that was something different.

I made a mental note to look up the article, but can’t seem to find it. I don’t think I dreamed that I saw this story. But my Google searches came up empty when I went hunting for it.

What I did find was a piece by Colin Barnden on EE Times that asked the provocative question: “Are We Prepared for Killer Robo-Drivers?”

My short, personal answer is, no, I’m not. But as someone who’s interested in both technology (sensors, AI, robots) and the automotive world, I went along for the ride.

And as someone who’s been watching the news about autonomous vehicles over the years, I know that the idea that they’ll become the norm is always just around the corner. The corner, however, always seems to turn into another corner. So far, purely autonomous vehicles have proved to be problematic other than under perfect conditions: fine weather, the road clear as far as the eye can see, no heedless pedestrians stepping off the curb while texting…

Sooner or later, self-driving vehicles will arrive. But one of the promises has been that when they take to the road, the number of people killed in road-related accidents will be reduced. (Worldwide, more than 1.3 million people are killed in traffic accidents each year.) And that reduction isn’t going to happen any time soon.

Thus, Barnden’s question(s):

Are we really prepared for killer robo-drivers? Has anyone seriously thought through the legal implications of software killing other road users? How does public opinion of robo-drivers change as the body count rises? How will lawmakers and regulators respond to that change in public opinion? How is justice seen to be done when robo-drivers kill? We don’t know.

Looks like the automotive and legal/political worlds will have a while to work on resolving these issues.

Last month, AAA issued a report entitled “Today’s Vehicle Technology Must Walk So Self-Driving Cars Can Run,” which detailed the results of their annual survey. One of the key findings was that, when it comes to where drivers want research and development money to go, they’d rather see improvements in vehicle safety systems than self-driving cars.

…Only 22% of people feel manufacturers should focus on developing self-driving vehicles. The majority of drivers (80%) say they want current vehicle safety systems, like automatic emergency braking and lane keeping assistance, to work better and more than half — 58% — said they want these systems in their next vehicle.

One of the problems that automakers will have to solve is that, when more complex technologies are introduced in “regular” vehicles on the way to fully autonomous ones, it takes too long for the human driver to take over from automation as needed. As of now, it takes about 40 seconds. That’s a long time when life and death are at stake.

Five years ago, some experts were predicting that we’d be seeing autonomous vehicles on the road by this year. Now they’re saying that the realization of this vision is still a decade or two away. Plenty of time to figure out the legalities of the situation. But with that long time horizon, it’s no wonder that some folks are trying to figure out how to help things along by embedding sensors in humans so that the self-driving cars don’t mow them down. Wonder where they’ll find their guinea pigs?

 

In the market for a new TV?

As I suspect is true of many folks, since the pandemic hit, I’ve been watching more TV than is my norm. So I’m grateful that there are so many options to choose from – you really can’t ever say that “there’s nothing on.” And I’m grateful for the quality of the TVs themselves.

I don’t remember the days of black & white only, of three channels, of TVs embedded in mahogany consoles. But my parents would talk about the “good old days” of the aluminum antenna on the roof that was always blowing over. Of tinfoil on rabbit ears to improve the blurry, snowy reception. And I am old enough to remember big boxy TVs that didn’t always have a clear picture. There’ve been so many improvements over the years: the proliferation of cable services, the introduction of flat screen TVs, entertainment on demand, the advent of high definition…

All of this made the TV watching experience more enjoyable, that’s for sure. When I sit here watching Syracuse basketball or a movie I always wanted to see but never got around to, I’m happy that the resolution is so precise, the colors so true, the detail so amazing.

While I’m grateful for the quality, I’m satisfied with what I can get off the shelf at Best Buy – without having to pay big bucks for the latest technology (organic light-emitting diode, or OLED, television). Let alone going for the LG rollable OLED, which was rolled out last year with a hefty $87,000 price tag.

The LG rollable apparently had a major flaw. While it could be rolled up, you had to stow it in a box to keep it out of sight. Which was seemingly too much effort for some consumers. So C SEED, responding to what no doubt was overwhelming demand from the market, has come out with a 165-inch TV – that’s nearly 14 feet! How big is your living room? – that disappears into your floor when not in use.

Rather than OLED screen technology, which is flexible – and thus lets the LG OLED roll up – C SEED’s new M1 TV uses microLEDs.

MicroLEDs, which many consider to be the future of screen technology, combine the best features of the current leading screen technologies with self-illuminated RGB pixels that don’t require a backlight, and without the degrading organic compounds that are used to manufacture OLED displays. The new screen tech is also more energy efficient, allows for slimmer screens, and can produce whites and blacks that rival the best TVs currently on the market.

The only downside is that microLED displays can’t fold like OLEDs can—at least yet. So to make a 165-inch TV disappear into the floor, C SEED has instead designed the M1 to first separate into five separate panels that fold into each other like a giant fan. That’s the other advantage of microLED screen technology: It allows much larger TVs to be assembled from smaller panels while perfectly hiding all the seams, so the final result looks like one giant uniform display. (Source: Gizmodo)

Looks like there are a couple of other downsides. One is the construction work you’d need to do to get the TV to disappear into the floor. The other is the price: $400,000. And, yes, you read that right. Talk about sticker shock! Makes the $87K price tag for the LG rollable almost (but not quite) seem reasonable.

Just in case you’re interested in the tech specs, here they are from the C SEED M1 brochure.

PHYSICAL DIMENSIONS TV

  • LED TV size (diagonal): 165 in / 4,196 mm
  • LED TV size (width): 144 in / 3,657 mm
  • LED TV size (height): 81 in / 2,057 mm
  • Standard LED screen depth: 3.9 in / 100 mm
  • LED TV area: 81 sq. ft. / 7.53 m²
  • Total system weight: 1,350 kg

TV SYSTEM

  • Resolution: 4K (UHD)
  • Brightness: 1,000 nits
  • Pixel pitch: 0.9 mm
  • Processing depth: 16 bit per color
  • Color spectrum: 64 billion colors
  • Refresh rate: 1,920 Hz
  • LED lifespan: 100,000 h
  • Contrast ratio: 30,000:1
  • Color temperature: 6,500 – 9,000 K
  • Viewing angle (horizontal | vertical): 160° | 140°
  • Operating temperature range: 0 – 40 °C
  • Broadband speaker peak output: 2 x 250 W
  • Broadband speaker frequency range: 40 – 22,000 Hz
  • Subwoofer peak output: 1 x 700 W
  • Subwoofer frequency range: 24 – 200 Hz

INPUT / OUTPUT

  • Video input: 1 x HDMI, HDCP 2.2 support
  • Serial input/output: 2 x USB, 1 x RS232
  • Audio output: 11.2, independent sub
  • Network connection: 1 x RJ45

OPERATION

  • Power supply (LED screen): 3 x 400V+N+PE/32A/50-60Hz AC (3~)
  • Input power (max | typical): 480 | 160 W/m²
  • Power consumption (max | typical): 3.6 | 1.2 kW

SPECIFIC FEATURES

  • Adaptive Gap Calibration (AGC)
  • Automatic TV cover with flush-mount floor option
  • Sound system: 2.1
  • 4 individual colors (black, silver, gold, titanium)

C SEED calls the TV watching experience with the M1 “breathtaking.” For $400K it ought to be!

A timely primer on wind power

I’ve been following the terrible weather situation in Texas, watching as the people there have been struggling with fierce cold, snow, and ice that’s well beyond what they’re used to. Of course, the big story in Texas is the inability of their power grid to keep up with demand.

Wind is a big part of that grid, accounting for a bit under one-quarter of the electricity Texas generates. The adverse weather conditions have put a crimp in the wind supply, as turbines have frozen up, but wind has actually done better than natural gas (the main source of energy there) and nuclear in terms of meeting winter output expectations.

Anyway, since wind power is in the news, I thought I’d read up a bit on it. And ended up deciding to do a little primer on it.

Wind power has, of course, been around for a good long time. Think of all those windmills in the Netherlands. And those windpumps you see in pictures of farms and ranches, used to help water the stock. But the big wind turbines we see popping up aren’t used for a specific mechanical task; instead, via a generator, they turn wind into electricity.

The US Department of Energy has a lot of good, basic info on wind, which is where I found this overall definition of how things work:

A wind turbine turns wind energy into electricity using the aerodynamic force from the rotor blades, which work like an airplane wing or helicopter rotor blade. When wind flows across the blade, the air pressure on one side of the blade decreases. The difference in air pressure across the two sides of the blade creates both lift and drag. The force of the lift is stronger than the drag and this causes the rotor to spin. The rotor connects to the generator, either directly (if it’s a direct drive turbine) or through a shaft and a series of gears (a gearbox) that speed up the rotation and allow for a physically smaller generator. This translation of aerodynamic force to rotation of a generator creates electricity. (Source: energy.gov)
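To put rough numbers on that, the power a turbine can extract is commonly estimated as P = ½ × ρ × A × v³ × Cp, where ρ is air density, A is the swept rotor area, v is wind speed, and Cp is a power coefficient (theoretically capped at roughly 0.59 by the Betz limit). Here’s a small back-of-the-envelope sketch; the blade length, wind speed, and coefficient are assumptions I picked for illustration, not specs for any real turbine:

    #include <math.h>
    #include <stdio.h>

    /* Back-of-the-envelope wind turbine power estimate: P = 0.5 * rho * A * v^3 * Cp.
     * All inputs below are illustrative assumptions, not real turbine specs. */
    int main(void)
    {
        const double PI  = 3.14159265358979;
        double rho       = 1.225;   /* air density at sea level, kg/m^3 */
        double blade_len = 50.0;    /* rotor blade length, m (assumed) */
        double v         = 10.0;    /* wind speed, m/s (assumed) */
        double cp        = 0.40;    /* power coefficient (Betz limit ~0.59) */

        double area    = PI * blade_len * blade_len;           /* swept rotor area, m^2 */
        double power_w = 0.5 * rho * area * pow(v, 3.0) * cp;  /* watts */

        printf("Estimated output: %.1f MW\n", power_w / 1e6);  /* roughly 1.9 MW */
        return 0;
    }

That cubed wind-speed term is why siting matters so much, and why a consistently windy place is so attractive for wind farms.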

There are two main types of wind turbines. I was most familiar with the horizontal-axis version. That’s the one that’s usually pictured, and the one used at the wind farm in my area in Upstate New York. Horizontal-axis turbines face the wind. Vertical-axis turbines, on the other hand – which include the “egg-beater” type – are omnidirectional.

Wind turbines are land-based or offshore, and distributed or not. What we mean by land-based or offshore is pretty obvious. (One if by land, two if by sea…) And they’re generally aggregated into farms that provide bulk energy. Distributed turbines are for smaller-scale use. They’re “installed on the ‘customer’ side of the electric meter or are installed at or near where the energy they produce will be used.”

Texas is actually an excellent place for wind energy because, with its topography, it has an awful lot of wind. And once they’ve invested in hardening their turbines so they can better operate under harsh weather conditions, they’ll be in great shape to withstand future bouts of awful weather. Wind turbines can obviously stand up to the rigors of cold and snow, as they work well in cold places. How do they do it?

In Canada, where wind turbines can experience icing up to 20% of the time in winter months, special “cold weather packages” are installed to provide heating to turbine components such as the gearbox, yaw and pitch motors and battery, according to the Canadian government. This can allow them to operate in temperatures down to minus 22 degrees Fahrenheit (minus 30 Celsius).

To prevent icing on rotor blades — which cause the blades to catch air less efficiently and to generate less power — heating and water-resistant coatings are used.

One Swedish company, Skellefteå Kraft, which has experimented with operating wind turbines in the Arctic, coats turbine blades with thin layers of carbon fiber which are then heated to prevent ice from forming. Another method used by the company is to circulate hot air inside the blades. (Source: Forbes)

As with so many things in life, there’s a risk-reward calculation at play here.

Meanwhile, my thoughts are with the people in Texas who have been enduring so much. Winters are no trip to the beach in Upstate New York, but we’re pretty well used to it. Sure, there are occasional power outages, but we mostly get through our admittedly brutal winters without having to worry about heat, light, and flushing the toilet.

What’s trending in technology for 2021

Hard to imagine that we’re already into February, but it’s still not too late to take a look at the technology trends for the new year. So here’s my summary of the trends list created by Anis Uzzaman for Inc., which I recently came across.

Not surprisingly, the first trend is the advances in vaccine development and COVID testing sparked by the pandemic. Testing approaches, while not 100% perfected, have improved, and affordable home testing kits are on the near horizon. Technological breakthroughs are enabling more rapid vaccine development, and the mRNA vaccines now out (from Pfizer and Moderna) represent significant innovation.

Videoconferencing and work-from-home technology have been progressing for years now. But it took COVID to make Zoom a household word: noun, verb, adverb, adjective, participle, gerund… There are many other conferencing and collaboration platforms out there that let groups meet, “create and share content, interact, track projects, train employees, run virtual team-building activities, and more.” I’m hoping that 2021 is the year in which organizations can resume having employees working in person, but the ability to work remotely isn’t going anywhere, and there’s a lot of excellent technology making it all possible.

Contactless delivery and shipping aren’t going anywhere, either. While many of us have gotten used to the delivery guy texting us that our take-out dinner is on the doorstep, a Chinese company, Meituan, is taking things a step further. It’s now begun “using autonomous vehicles to help fulfill grocery orders to customers.”

With people looking to minimize their exposure to COVID, telehealth and telemedicine have taken off, and “telehealth visits have surged by 50 percent compared with pre-pandemic levels.” There’s no substitute for an in-person visit to the doctor, and obviously many health problems can’t be solved remotely. No home MRIs, no setting a broken leg over the Internet. But routine patient chats can take place online, many problems can be monitored remotely, and offerings for “A.I. avatar-based diagnostics, and no-contact-based medication delivery” mean that more and more problems can be taken care of remotely.

Our kids are grown, so my wife and I haven’t had to become home schoolers, but the pandemic has certainly been a boon for online education and e-learning. Classes and even athletic coaching are being conducted using Zoom and other videoconferencing platforms, but many e-learning platforms purpose-built for learning (as opposed to business meetings) are coming to the fore, for both higher ed and K-12.

Homeschooling (mostly involuntary) and working from home are compounding the demand for 5G infrastructure, new applications, and utilities that was already coming from smart homes, smart businesses, smart cities, and autonomous mobility. The IoT really is becoming the Internet of Everything. 5G is now deployed in many countries, and expanding rapidly. And 6G is on the horizon.

Building on the development of the last several decades, A.I., robotics, the IoT, and industrial automation all continue to grow rapidly. Post-pandemic,

“As manufacturing and supply chains are returning to full operation, manpower shortages will become a serious issue. Automation, with the help of A.I., robotics, and the internet of things, will be a key alternative solution to operate manufacturing.”

Use of virtual reality (VR) and augmented reality (AR) technologies is rising. With entertainment options diminished due to COVID, interest in VR and AR is naturally increasing. But “immersive technologies from AR and VR innovations enable an incredible source of transformation across all sectors.” Real estate “house tours”. Remote support. Virtual trade shows. There are a growing number of VR/AR application areas. And the acceleration of VR/AR will, in turn, spur demand for (and the development of) 5G networks.

As demand for fossil-fuel based transportation decreases, micromobility continues to grow. If you’re not familiar with micromobility – and it’s not something I have a lot of interest in – we’re talking about e-bikes and e-scooters. This is mostly a game for younger, urban folks – an e-scooter isn’t going to work for grocery shopping for the family – but, along with the likes of Uber and Zipcar, they’re part of the growing menu of transportation alternatives.

But cars will continue to be on the menu, as well. And some of them will be autonomous vehicles. The technology isn’t fully here yet, but automakers have been introducing incremental self-driving features for years. (Think automatic braking and hands-free parking.) But autonomous cars will likely be generally available in the coming decade.

In his trends piece, Anis doesn’t introduce anything particularly innovative or game-changing, “just” technology that builds on earlier development. But isn’t that the way most technological advances occur? Anyway, it’s always fun to see what those in the know anticipate for the coming year.