One-shot Alzheimer’s plaque cleanup & AI MRI Alzheimer’s prediction - Tech News (Mar 7, 2026)
March 7, 2026: One-shot Alzheimer’s plaque cleanup, AI chip export clampdown, social media design on trial, Ukraine ground robots, and DART’s orbital nudge.
Topics
- 01 One-shot Alzheimer’s plaque cleanup — Washington University researchers used engineered astrocytes as “super cleaners” to remove amyloid beta in mice, suggesting a potential one-time Alzheimer’s therapy alternative to repeated monoclonal antibody infusions.
- 02 AI MRI Alzheimer’s prediction — Worcester Polytechnic Institute reports an AI model reading MRI scans can predict Alzheimer’s with high accuracy, highlighting hippocampus volume loss and potential earlier detection for patients and clinicians.
- 03 AI chips export approval rules — The Trump administration is considering rules requiring Commerce Department approval for overseas shipments of advanced AI chips, a move that could reshape global supply chains for Nvidia, AMD, and major buyers.
- 04 Who governs AI: CEOs or law — Anthropic’s dispute with the U.S. Defense Department spotlights AI governance tensions, where corporate policies on surveillance and weapons may function like de facto regulation without democratic accountability.
- 05 Social media design on trial — A Los Angeles case aims to treat algorithmic features like infinite scroll and autoplay as product design choices, challenging Section 230 boundaries and potentially forcing platform redesigns for teen safety.
- 06 Critical minerals become security issue — The U.N. warns demand for lithium, cobalt, nickel and other critical minerals could surge by 2030 and 2040, pushing supply chains into the center of geopolitics, trade policy, and conflict-risk debates.
- 07 Robot ground vehicles in Ukraine — Ukraine is expanding weaponized uncrewed ground vehicles as drones widen the battlefield kill zone, raising new questions about partial autonomy, operator control, and future robot-on-robot combat.
- 08 DART nudges an asteroid’s orbit — New research confirms NASA’s DART impact not only altered an asteroid moonlet’s local orbit, but also measurably changed its path around the Sun—an important real-world datapoint for planetary defense.
- 09 Flying taxi scale-up in China — AutoFlight’s large eVTOL prototype signals how China’s “low-altitude economy” could evolve from delivery drones toward passenger aircraft, though safety certification and infrastructure remain major hurdles.
- 10 EV charging claims jump forward — BYD showcased next-generation battery and ultra-fast charging claims meant to reduce range anxiety and charging downtime, potentially pressuring the broader EV market if results hold up in everyday conditions.
Sources
- → https://medicine.washu.edu/news/enhanced-brain-cells-clear-away-dementia-related-proteins/
- → https://abcnews.com/Technology/wireStory/demand-minerals-power-technology-triple-2030-political-chief-130811464
- → https://www.bbc.com/news/articles/c62662gzlp8o
- → https://www.rnz.co.nz/news/world/588914/nasa-s-attempt-to-kick-asteroid-off-course-was-a-success
- → https://www.newsday.com/business/china-electric-aircraft-evtol-matrix-shanghai-f69575
- → https://theconversation.com/how-instagram-addictiveness-lawsuit-could-reshape-social-media-platform-design-meets-product-liability-277066
- → https://techcrunch.com/2026/03/05/us-reportedly-considering-sweeping-new-chip-export-controls/
- → https://www.fastcompany.com/91503415/byd-ev-battery-competes-with-gas-engines?partner=rss
- → https://www.businessinsider.com/anthropic-pentagon-dario-amodei-new-power-struggle-democracy-versus-ai-2026-3
- → https://www.independent.co.uk/news/health/ai-alzheimers-disease-machine-learning-b2933453.html
Full Transcript
Scientists just turned ordinary brain support cells into plaque-eating “super cleaners” that, in mice, kept Alzheimer’s-style buildup from appearing after a single treatment. What that could mean for future therapies is one of our biggest stories today. Welcome to The Automated Daily, tech news edition, the podcast created by generative AI. I’m TrendTeller, and today is March 7th, 2026. Let’s get into what happened—and why it matters.
One-shot Alzheimer’s plaque cleanup
Let’s start with Alzheimer’s research, because we got two developments that rhyme in a useful way: one is about clearing the disease’s hallmark proteins, and the other is about spotting risk earlier.
First, researchers at Washington University School of Medicine reported a striking result in Science: they re-engineered astrocytes—cells that normally support neurons—so they recognize and swallow amyloid beta, the protein that forms Alzheimer’s-related plaques. The twist is they borrowed a playbook from cancer therapy: a receptor design that helps immune cells “lock on” to a target. Here, the target is amyloid in the brain. In mouse models, a single injection given before plaques typically form prevented plaque buildup for months. And in older mice already loaded with plaques, that same one-time approach cut plaque levels by about half.
The big reason this is turning heads is practicality: today’s anti-amyloid antibody treatments are typically a repeating commitment. A durable, one-and-done strategy—if it ever proves safe and effective in humans—could radically reduce treatment burden. The researchers are also careful to say this is early, and the safety and targeting questions are not optional homework. Still, it’s a notable new direction: instead of repeatedly sending in cleanup crews, you try to upgrade the brain’s own staff.
AI MRI Alzheimer’s prediction
On the detection side, researchers at Worcester Polytechnic Institute say they trained a machine-learning model to predict Alzheimer’s from MRI scans with very high accuracy, by picking up subtle shrinkage patterns across many brain regions. One standout finding: early volume loss in the right hippocampus showed up consistently, and the team also described differences between men and women in where the earliest changes appear. The headline here isn’t that AI “solves” Alzheimer’s—far from it—but that better early warning could buy people time: time to plan, to enroll in studies, and to use treatments when they’re most likely to help.
AI chips export approval rules
Now to AI policy and power, where two stories point to the same pressure point: who actually gets to decide how advanced AI is used—and where it’s allowed to go.
First, Bloomberg reports the Trump administration is weighing draft rules that would require U.S. government approval for shipments of advanced AI chips to basically anywhere outside the United States. If this becomes policy, it would expand oversight from targeted restrictions to something closer to continuous gatekeeping of global sales. Why it’s interesting is the second-order effect: approvals that are slower or unpredictable can push international buyers to redesign plans around non‑U.S. suppliers over time, even if American chips remain best-in-class. For the U.S., that’s a delicate trade: tighten controls to protect security interests, but risk shrinking influence over the very supply chains you’re trying to steer.
Who governs AI: CEOs or law
In parallel, there’s a brewing argument about governance itself. A piece focused on Anthropic describes the company’s dispute with the U.S. Department of Defense as more than contract drama—framing it as a test of whether AI firms can effectively set policy boundaries that elected governments can’t easily override. Anthropic’s CEO has voiced concerns about domestic surveillance and autonomous weapons, and critics respond with a blunt question: if these decisions are made inside boardrooms, what accountability does the public actually have? This isn’t just about one company. Across the industry, stated “red lines” can shift when competition heats up or revenue opportunities expand. So the larger takeaway is that we’re still deciding whether the rules of AI use will come primarily from law and oversight—or from corporate principles that can be rewritten on short notice.
Social media design on trial
Staying with accountability, a major U.S. court case is testing a new way to hold social media platforms responsible—without focusing on what users posted. In Los Angeles, a trial is putting Meta and Google under the microscope with an argument that the harm comes from product design: the engagement loops, the endless feeds, the autoplay, the recommendation engines, and the nudges that keep people—especially kids—coming back. The plaintiff says these features helped drive compulsive use that worsened serious mental-health struggles.
The legal significance is how the case tries to route around Section 230 protections. Instead of claiming the platforms are liable for third-party content, the claim is essentially: you built a product with known risk, and you didn’t do enough to prevent foreseeable harm. A judge allowed it to reach a jury, and it’s being treated as a bellwether for a much larger set of similar claims. If that approach holds up, it could change the incentives for product teams everywhere. The question would no longer be only “Is the content allowed?” but also “Is the interface itself safe enough, especially for minors?”
Critical minerals become security issue
Next, the geopolitics of the modern gadget—and the modern military. At the U.N. Security Council, the U.N.’s political chief warned demand for critical minerals could surge dramatically over the next decade and beyond, as these materials underpin everything from phones and data centers to energy storage and weapons systems. The meeting cast mineral supply chains as a security issue, not just an economic one. This matters because we’re watching resource dependencies harden into strategy. The backdrop is U.S.-China competition and tighter trade constraints, with governments now talking about diversification and allied sourcing—while countries that actually mine these materials are pushing back, saying “secure supply” can’t mean ignoring governance, corruption, or conflict financing. So the story isn’t just about digging more stuff out of the ground. It’s about whether the next phase of the energy transition can be built without repeating old mistakes: exploitative extraction, fragile supply chains, and incentives that reward shortcuts.
Robot ground vehicles in Ukraine
On the battlefield, Ukraine’s war continues to preview what modern conflict could look like when robots get pulled down from the sky and onto the ground. Reports describe Ukraine rapidly expanding armed uncrewed ground vehicles—UGVs—that can carry weapons or explosives and operate in environments where it’s increasingly dangerous for soldiers to move. Commanders emphasize that many systems are still only partly autonomous: machines may help navigate or spot targets, but humans make the final call on firing. The why here is grimly practical. Aerial drones have widened the “kill zone,” making traditional movement and resupply far riskier. Combined with manpower strain, that creates pressure to push more tasks onto machines. Russia is also fielding combat UGVs, raising the possibility of robot-on-robot encounters—an escalation not in drama, but in trajectory. As autonomy improves, so does the urgency of the legal and ethical debate around lethal decisions and accountability when something goes wrong.
DART nudges an asteroid’s orbit
Let’s zoom out—literally—to space, where NASA’s DART mission keeps paying scientific dividends. DART was the 2022 test where NASA intentionally crashed a spacecraft into the small asteroid moonlet Dimorphos to see if a kinetic hit could change its motion. New work in Science Advances says researchers have now confirmed something extra: the collision not only altered Dimorphos’s orbit around its partner asteroid, it also measurably shifted the pair’s path around the Sun. The change is tiny, but that’s the point. Planetary defense is a game of lead time: small nudges, applied early enough, can turn into big misses later. This is also being framed as the first time humanity has measurably altered the solar orbit of a celestial body—a milestone that’s equal parts engineering flex and sober reminder that we’re learning to move objects in space on purpose.
Flying taxi scale-up in China
Finally, two transportation stories that hint at where mobility could be heading—if regulators, infrastructure, and real-world performance keep up. In China, startup AutoFlight showed a large electric vertical takeoff and landing aircraft prototype—essentially a very big, very quiet “drone-like” passenger craft. The demonstration was short, and commercial service is still years away, but it’s a useful signal: China’s push for a so-called low-altitude economy isn’t just about delivery drones. It’s also about building the regulatory and industrial base for future passenger flight at city scale. The hard parts remain the hard parts: certification, safety assurance, air routing, and the infrastructure you need for lots of takeoffs and landings without chaos.
EV charging claims jump forward
On the EV front, BYD unveiled updated battery and fast-charging claims aimed straight at the two pain points people still cite: range anxiety and charging time. The company’s message is that charging could start to feel less like “waiting” and more like a brief stop—at least under ideal conditions. The important caveat, as always, is translation from staged demos to everyday use: weather, charging network quality, battery aging, and real-world driving all decide whether these headline claims become normal life. But even as a direction of travel, it raises competitive pressure across the EV industry.
That’s the tech landscape for March 7th, 2026: brain cells retooled for cleanup, courts rethinking platform responsibility, governments tightening the flow of AI hardware, and robots—and spacecraft—quietly changing what’s possible. If you want to support the show, share this episode with someone who likes tech news without the hype. I’m TrendTeller, and I’ll be back tomorrow with your next briefing from The Automated Daily, tech news edition.