Tech News · February 23, 2026 · 10:16

Airlifting a 5‑MW microreactor & Superconductors for data-center power - Tech News (Feb 23, 2026)

A microreactor flies on a C‑17, Microsoft bets on superconducting cables for AI data centers, and autonomous coding agents raise the stakes for safety.

Today's Tech News Topics

  1. Airlifting a 5‑MW microreactor — The U.S. Department of Defense flew an unfueled 5‑MW Ward250 microreactor on C‑17 aircraft, testing rapid deployment logistics under the Janus Program. Keywords: microreactor, TRISO fuel, HALEU, C‑17, remote base power.
  2. Superconductors for data-center power — Microsoft is backing high‑temperature superconducting (HTS) cables as a way to move more electricity through cramped AI data centers with lower losses. Keywords: HTS, REBCO tape, liquid nitrogen cooling, grid constraints, hyperscalers.
  3. AI coding agents go unattended — Stripe says over 1,300 weekly pull requests are fully produced by its autonomous ‘minions,’ using isolated devboxes and blueprint-based orchestration. Keywords: coding agents, dev environments, CI loops, guardrails, automation.
  4. MCP and tool access scaling — As Model Context Protocol (MCP) catalogs explode, Cloudflare and Stripe are converging on ‘discoverable tools’ and code-based execution to keep context small and safer. Keywords: MCP, typed SDK, sandboxing, Toolshed, OpenAPI search/execute.
  5. AI hype meets public skepticism — New surveys suggest AI’s narrative is slipping: many people fear AI harms, and firms report limited productivity impact so far, even as leaders promise transformation. Keywords: adoption, productivity, narrative gap, diffusion, skepticism.
  6. Gemini 3.1 Pro reasoning jump — Google previewed Gemini 3.1 Pro, emphasizing multi-step reasoning gains and broader rollout across the Gemini app, NotebookLM, Vertex AI, and the Gemini API. Keywords: ARC-AGI-2, reasoning, agentic workflows, developer tools, Deep Think.
  7. Mars rover gets GPS-like navigation — NASA upgraded Perseverance with Mars Global Localization, letting the rover self-localize within about 25 cm by matching panoramas to orbital maps. Keywords: autonomy, localization, Jezero Crater, drive planning, robotics software.
  8. Germany’s new space deterrence push — Germany is investing heavily in military space capabilities, including radar reconnaissance and resilient SATCOM, while discussing non-kinetic options like jamming and dazzling. Keywords: Bundeswehr, Iceye, SAR, deterrence, satellite resilience.
  9. Polygenic embryo selection concerns — A new book warns polygenic scores and embryo selection are racing ahead of regulation, with risks of inequality, misleading marketing, and reduced genetic diversity. Keywords: polygenic scores, embryo screening, regulation, race myth, destiny myth.

Full Episode Transcript: Airlifting a 5‑MW microreactor & Superconductors for data-center power

A full-scale nuclear reactor just took a flight on a military cargo jet—modularized, containerized, and treated almost like standard freight. That one detail tells you a lot about where energy, security, and AI infrastructure are headed. Welcome to The Automated Daily, tech news edition, the podcast created by generative AI. I’m TrendTeller, and today is February 23rd, 2026. Let’s unpack what’s driving the next wave of compute, automation, and security—without the hype.

Airlifting a 5‑MW microreactor

First up: energy, because AI doesn’t run on vibes—it runs on megawatts. The U.S. Department of Defense says it has, for the first time, airlifted a complete 5‑megawatt nuclear microreactor using C‑17 Globemaster aircraft. The reactor, called the Ward250, was shipped unfueled, broken into eight modules, and moved from California to Utah for assembly and eventual operation. The exercise—under the Janus Program—was designed to prove the reactor can be transported through a repeatable logistics chain and delivered to remote sites with relatively short runways. The bigger point isn’t just “nuclear goes on a plane.” It’s that dependable power is now treated as a deployable capability—something you can reposition when fuel supply lines are risky, grids are unreliable, or you simply need energy where the mission is.

Superconductors for data-center power

Now zoom out to the civilian side of the same power story: AI data centers are stressing electric grids, and the bottleneck isn’t only generation—it’s also delivery. Even in the U.S., average transmission and distribution losses sit around five percent, and that can be worse in other regions. Microsoft is highlighting an approach that sounds futuristic but is grounded in physics: replacing chunky copper power delivery inside data centers with high‑temperature superconducting cables. Copper heats up and hits practical limits as current rises. High‑temperature superconductors—still cryogenically cold, just not as extreme as older superconductors—can carry current with near-zero resistance. Microsoft has invested in Veir, which is building superconducting conductors using REBCO tape, and pushing a closed-loop liquid nitrogen cooling system. The promise is straightforward: dramatically higher capacity in the same footprint, fewer losses, and potentially fewer substations in the chain. The catch is equally straightforward: cost. Rare-earth materials and cryogenic systems aren’t cheap, so the near-term sweet spot is where space, heat, and voltage drop are already squeezing operators—exactly the scenario in dense AI facilities.
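To make the copper-versus-HTS tradeoff concrete, here is a back-of-the-envelope sketch of resistive loss in a conventional feeder. All the numbers are invented round figures for illustration, not Veir or Microsoft specifications:

```python
# Illustrative only: I^2 * R loss in a hypothetical copper feeder.
# An ideal superconducting link has effectively zero DC resistance,
# so its "loss" budget shifts to the cryogenic cooling system instead.

def copper_loss_kw(current_a: float, resistance_ohm_per_km: float, length_km: float) -> float:
    """Resistive loss for a single conductor run, in kilowatts."""
    r_total = resistance_ohm_per_km * length_km
    return (current_a ** 2) * r_total / 1000.0

# A made-up 4 kA feeder over 0.5 km of heavy copper busway
# (0.01 ohm/km is a round number chosen for illustration):
loss = copper_loss_kw(current_a=4000, resistance_ohm_per_km=0.01, length_km=0.5)
print(f"copper loss: {loss:.0f} kW")  # 4000^2 * 0.005 / 1000 = 80 kW
```

Because loss scales with the square of the current, the penalty grows quickly as facilities push more amps through the same footprint—which is exactly where superconductors become interesting.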

Faster inference via shared KV cache

Staying with AI infrastructure, but shifting from electrons to tokens: Crusoe is pitching a managed inference service designed to squeeze more throughput and lower latency out of large models. Crusoe’s headline claim is big—up to about ten times faster time-to-first-token and up to five times more tokens per second versus a vLLM baseline in its benchmark configuration. The core idea is a shared, cluster-wide memory fabric for key-value cache—so repeated work doesn’t get repeated, sessions can persist, and requests can route to wherever the cache is already warm. As always, benchmark caveats apply, but the direction is clear: inference providers are now competing on systems tricks—caching, routing, batching, speculative decoding—not just on “we have GPUs.”
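The routing half of that idea can be sketched in a few lines: send each request to the worker whose cache already holds the longest prefix of the prompt, so prefill work is not redone. This is a toy model of the general cache-aware routing technique, not Crusoe’s actual implementation:

```python
# Toy cache-aware router: prefer the worker with the "warmest" cache,
# i.e. the longest cached token prefix matching the incoming prompt.

def shared_prefix_len(a: list[int], b: list[int]) -> int:
    """Length of the common leading run of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(prompt_tokens: list[int], worker_caches: dict[str, list[int]]) -> str:
    """Pick the worker whose KV cache overlaps the prompt the most."""
    return max(worker_caches,
               key=lambda w: shared_prefix_len(prompt_tokens, worker_caches[w]))

caches = {
    "worker-a": [1, 2, 3, 9],     # cached a different conversation
    "worker-b": [1, 2, 3, 4, 5],  # already holds most of this prompt
}
print(route([1, 2, 3, 4, 5, 6], caches))  # worker-b
```

A real system layers session affinity, eviction, and load balancing on top, but the core win is the same: route to cached state instead of recomputing it.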

AI coding agents go unattended

Let’s move from running models to letting models run… work. Stripe has published more detail on its unattended coding agents—internally nicknamed “minions.” The striking metric: more than 1,300 Stripe pull requests per week are now entirely produced by these agents. Humans still review, but in those PRs they’re not writing any of the code. The enabling pieces are less glamorous than the agent itself. Stripe relies on standardized, replaceable “devboxes” on AWS—isolated environments that can be spun up fast, run in parallel, and keep credentials and sensitive access locked down. It also leans on an orchestration pattern it calls “blueprints,” mixing deterministic steps—like linting or pushing changes—with flexible loops like “fix CI failures.” That turns an agent into something closer to a state machine, which matters when you’re chasing reliability instead of clever demos. Stripe is also building out a tool layer using the Model Context Protocol, with an internal hub called Toolshed to make tools discoverable across agent systems—while still keeping default access deliberately narrow for safety.
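The blueprint pattern described above—deterministic steps interleaved with bounded agent loops—can be sketched as a tiny orchestrator. Step names, the loop cap, and the return values are illustrative assumptions, not Stripe’s actual design:

```python
# Minimal "blueprint" orchestrator: 'det' steps run once and must
# succeed; 'loop' steps (e.g. "fix CI failures") retry with a hard cap,
# which is what keeps an agent behaving like a state machine.

def run_blueprint(steps, max_attempts=3):
    """steps: list of (name, fn, kind) where fn() -> bool success."""
    for name, fn, kind in steps:
        if kind == "det":
            if not fn():
                return f"failed at {name}"
        else:  # bounded agent loop
            for _ in range(max_attempts):
                if fn():
                    break
            else:
                return f"gave up on {name}"
    return "done"

attempts = {"n": 0}
def flaky_ci():
    attempts["n"] += 1
    return attempts["n"] >= 2  # succeeds on the second try

plan = [
    ("lint", lambda: True, "det"),
    ("fix-ci", flaky_ci, "loop"),
    ("push", lambda: True, "det"),
]
print(run_blueprint(plan))  # done
```

The hard cap is the important part: an unbounded "fix it until it works" loop is exactly the failure mode unattended agents need to avoid.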

MCP and tool access scaling

That MCP point matters, because tool sprawl is becoming the new context-window tax. Cloudflare is arguing that MCP has effectively become the standard way for agents to call external tools—but a huge catalog of tools can eat a model’s context and leave less room for the actual user problem. Cloudflare’s answer is what it calls “Code Mode”: instead of describing thousands of tools to the model, you expose a tiny interface—basically ‘search’ and ‘execute’—and have the model write code against a typed SDK. That code then runs in a sandboxed environment. In Cloudflare’s case, the model can search a pre-resolved OpenAPI spec without loading it into the prompt, and execute authenticated API calls inside a tightly controlled Worker isolate. Cloudflare claims this keeps the MCP footprint roughly constant—even for an API with thousands of endpoints—and reduces risks like prompt injection by limiting what the execution environment can see and do. Between Stripe’s tool hub and Cloudflare’s code-first MCP server, the theme is emerging: agents don’t just need tools—they need tool access that scales without turning every prompt into a phonebook.
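The "tiny interface" idea reduces thousands of tool schemas to roughly two entry points. Here is a toy version of that shape—the catalog, endpoint names, and stubbed execution are hypothetical, not Cloudflare’s Code Mode API:

```python
# Toy search/execute surface: the model never sees the full endpoint
# catalog in its context window; it queries for what it needs, then
# executes only endpoints the gate recognizes (a crude sandbox boundary).

CATALOG = {
    "listCustomers": "GET /v1/customers - list customers",
    "createRefund":  "POST /v1/refunds - create a refund",
}

def search(query: str) -> list[str]:
    """Find endpoints by keyword without loading the whole spec."""
    q = query.lower()
    return [name for name, desc in CATALOG.items() if q in desc.lower()]

def execute(endpoint: str, params: dict) -> dict:
    """Refuse anything outside the catalog; stub the actual call."""
    if endpoint not in CATALOG:
        raise PermissionError(f"unknown endpoint: {endpoint}")
    return {"endpoint": endpoint, "params": params, "status": "stubbed"}

print(search("refund"))  # ['createRefund']
print(execute("createRefund", {"charge": "ch_123"})["status"])  # stubbed
```

The prompt cost now stays roughly constant no matter how many endpoints the catalog holds, which is the scaling property Cloudflare is after.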

AWS outages tied to AI tooling

And yet, when you let agents touch real systems, the failure modes get real fast. A report citing Financial Times coverage says AWS experienced at least two minor outages late last year tied to internal AI-driven tooling. In one incident, an internal tool reportedly deleted and then recreated an environment in a way that triggered a lengthy outage—on the order of hours. Another wrinkle: permissions. The impact was reportedly amplified because an engineer using the tool had broader access than expected. The lesson here isn’t “never automate.” It’s that LLM-based automation behaves differently from deterministic infrastructure tools. When systems are probabilistic, postmortems get harder, testing gets fuzzier, and you need tighter guardrails around permissions, change boundaries, and rollback plans.

AI hype meets public skepticism

All of that collides with a broader narrative problem for the AI industry: the public still isn’t fully buying the sales pitch. One piece making the rounds points out that tech leaders are describing AI as a world-remaking force—sometimes with comparisons as grand as “the new electricity.” But surveys show anxiety is widespread, including a sizable share of people worried about existential outcomes. And on the business side, a major survey of firms found most reported no productivity or employment impact from AI so far. Even OpenAI’s Sam Altman has acknowledged adoption is diffusing more slowly than expected—less because the tech can’t do things, and more because organizations and consumers don’t absorb change on command. Nvidia’s Jensen Huang has also warned that critics may be winning the “battle of narratives.” If you’re wondering what could puncture an AI bubble, it’s not just chips or capital—it’s confidence.

Gemini 3.1 Pro reasoning jump

On the product side of that confidence battle, Google is trying to move the conversation back to capability. The company has previewed Gemini 3.1 Pro, emphasizing multi-step reasoning improvements and pointing to a strong score on the ARC-AGI-2 benchmark. Google is also doing something operationally important: rolling it out broadly across consumer and enterprise surfaces at the same time—Gemini app, NotebookLM, Vertex AI, and the Gemini API. This unified launch approach matters because it reduces the usual gap where developers hear about a model but can’t reliably deploy it where their workflows live. Google’s framing is clear: Gemini isn’t just a chatbot—it’s becoming a problem-solving layer across products, including more agentic behavior and tool use.

Mars rover gets GPS-like navigation

Now for a quick space block—because autonomy isn’t only a data-center story. NASA has upgraded the Perseverance rover with a software capability that’s basically a Mars-flavored version of GPS. There’s no satellite navigation network on Mars, so rovers typically estimate position using imagery and wheel tracking, and then wait for Earth teams to confirm. Over time, drift accumulates. The new system, Mars Global Localization, matches the rover’s panoramic images to onboard orbital maps and computes its position autonomously—NASA says within about 10 inches, roughly 25 centimeters, in about two minutes. That could translate into longer daily drives and fewer stops that exist purely because the rover isn’t confident enough about where it is.

Germany’s new space deterrence push

In the same orbit-adjacent theme, Germany is talking more openly about space as a defense domain. Officials are calling out how dependent civilian life is on satellites—communications, navigation, even basic financial operations—and Germany is budgeting major funding toward reconnaissance and SATCOM upgrades. The messaging also includes non-kinetic options like jamming and dazzling, and a push toward more resilient satellite architectures, not just a handful of exquisite spacecraft.

Polygenic embryo selection concerns

Finally, a different kind of technology with long shadows: consumer genomics and embryo selection. A new book, ‘What We Inherit,’ argues that commercial genetic tools—especially polygenic scores—could reshape society even if their predictions remain limited. Polygenic scores bundle lots of tiny genetic signals to make probabilistic forecasts for traits and disease risk. The catch is that accuracy varies, performance can drop sharply outside European-ancestry datasets, and selecting for multiple traits at once gets even messier. The authors warn that embryo selection using these scores is creeping in quietly—while public debate focuses more on gene editing. They argue the gains marketed by companies can be small and uncertain, but the societal risks are large: worsening inequality, misleading claims dressed up as science, and the possibility of a wealth-driven “optimized” class built more on access and sorting than on meaningful biological differences. Their bottom line is simple: if this is going to exist, it needs much stricter rules around generation, marketing, and use.
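At its simplest, a polygenic score is just a weighted sum of allele counts across many variants. The effect sizes and genotypes below are invented for illustration; real scores aggregate thousands of variants, and as the segment notes, their accuracy varies sharply across ancestries:

```python
# Minimal polygenic score: sum over variants of
# (per-allele effect size from a GWAS) * (allele count 0, 1, or 2).

def polygenic_score(genotypes: dict[str, int], weights: dict[str, float]) -> float:
    """genotypes: variant -> allele count; weights: variant -> effect size."""
    return sum(weights[v] * g for v, g in genotypes.items() if v in weights)

weights   = {"rs1": 0.12, "rs2": -0.05, "rs3": 0.03}  # hypothetical betas
genotypes = {"rs1": 2, "rs2": 1, "rs3": 0}
print(round(polygenic_score(genotypes, weights), 2))  # 0.19
```

The arithmetic is trivial; the controversy lies in how the weights are estimated, on whom, and what the resulting probabilities are marketed as meaning.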

That’s the tech landscape for February 23rd, 2026: deployable nuclear power, superconducting cables for AI density, agents writing real production code, and a growing tug-of-war between capability and trust. If you’re building with agents, the through-line today is guardrails—tool access, permissions, sandboxes, and clear handoffs to humans. And if you’re building the infrastructure underneath AI, the constraint isn’t imagination. It’s power, heat, and reliability. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller—see you tomorrow.