Transcript

Artemis II sets distance record & JWST finds odd exoplanet air - Tech News (Apr 7, 2026)

April 7, 2026


Humans just flew farther from Earth than Apollo 13—and watched a solar eclipse from behind the Moon. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is April 7th, 2026. Let’s get into what happened, and why it matters.

Starting in space: NASA’s Artemis II crew is on the return leg after a successful Moon flyby, the first crewed lunar mission in more than half a century. During the far-side pass, Orion set a new distance record for a crewed spacecraft, topping Apollo 13’s mark, and the crew came within a few thousand miles of the lunar surface. They also experienced the classic communications blackout behind the Moon—then made the most of the view, spotting an “earthrise” and even seeing a solar eclipse from space. Beyond the headlines, this was a practical rehearsal: life support routines, manual handling, spacesuit procedures, and all the unglamorous realities you need nailed down before committing to a landing mission. Splashdown is expected in the Pacific off San Diego later this week, and the observations are meant to sharpen future landing plans, including work near the lunar south pole.

More space science, but much farther out: the James Webb Space Telescope has an atmospheric puzzle on its hands. Researchers looking at TOI-5205 b—a Jupiter-sized planet hugging a small red dwarf—say the planet’s atmosphere appears unusually poor in heavy elements. That’s surprising because giant planets are usually expected to pick up lots of heavier material as they form. The team still sees signatures consistent with methane and hydrogen sulfide, and after accounting for the host star’s activity, the result held up. If it’s confirmed, it’s another reminder that planet formation is messier than our tidy models—especially around small, active stars.

Now to the AI business race, where the big story is still compute—who has it, who can afford it, and who can lock it up for years. Anthropic says its annualized revenue run rate has climbed sharply since late 2025, and it’s also reporting a surge in enterprise customers spending at a very high level. Alongside that, the company expanded efforts with Google and Broadcom to secure long-term capacity, including a disclosed path to massive TPU-based compute starting in 2027. The takeaway isn’t just that Anthropic is growing; it’s that the frontier-model market is increasingly shaped by long contracts for power, data centers, and specialized chips. In other words: this is becoming an infrastructure industry, not just a software one.

That links directly to the broader ‘compute crunch’ argument making the rounds in the industry. Analysts and engineers are pointing to a familiar pattern: demand rises fast, but new data centers and power delivery don’t appear overnight. And when the newest hardware needs more complex cooling and denser deployments, the practical rollouts can slow even more. If the next year or two really are defined by scarcity, users should expect more rationing—tighter limits, higher prices, and reliability issues—especially during peak demand.
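To make "rationing" concrete: one common shape those tighter limits take is a token bucket, where a burst allowance drains fast and refills slowly. This is only an illustrative sketch of the mechanism; the class, capacity, and refill numbers here are assumptions, not any provider's actual policy.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: `capacity` caps bursts, and
    `refill_rate` (tokens per second) caps sustained throughput."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never above capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A burst of six rapid requests against a bucket of three:
bucket = TokenBucket(capacity=3, refill_rate=0.5)
results = [bucket.allow() for _ in range(6)]
```

The first three requests pass, the rest are refused until the slow refill catches up, which is exactly the "works fine off-peak, throttled at peak" experience users see.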

Over at OpenAI, two different stories point in the same direction: AI is moving from a product wave into a policy-and-capital wave. First, Sam Altman released a new policy document arguing that advanced AI is close enough to require updated economic rules and stronger social protections—along with more urgent attention to cyber and biosecurity risks. Second, reporting suggests OpenAI leadership is divided over whether to pursue an IPO quickly and how aggressively to commit to long-term infrastructure spending. If that tension is real, it highlights the central challenge of the moment: the biggest AI labs are trying to build something society wants immediately, while also betting enormous sums on what the next five years will demand.

Zooming in on how AI is changing everyday software work: Vercel says it’s using an AI-driven workflow to auto-approve and merge a large share of low-risk pull requests in a major repository—while reserving human review for sensitive areas like security, payments, and core infrastructure. The interesting bit isn’t the automation itself; it’s the shift in how teams allocate attention. If the system works, reviewers spend less time rubber-stamping, and more time on the changes that can actually hurt you.
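The core of such a workflow is a routing decision: does this change touch anything sensitive? Vercel hasn't published its exact rules, so the path patterns below are assumptions, but a minimal sketch of the triage step might look like this:

```python
from fnmatch import fnmatch

# Illustrative only: glob patterns a team MIGHT treat as high-risk.
# These are assumed examples, not Vercel's actual rule set.
SENSITIVE_PATTERNS = [
    "auth/*",
    "payments/*",
    "infra/*",
    ".github/workflows/*",
]

def route_pull_request(changed_files: list[str]) -> str:
    """Send a PR to human review if any changed file matches a
    sensitive pattern; otherwise put it on the auto-merge fast path."""
    for path in changed_files:
        if any(fnmatch(path, pattern) for pattern in SENSITIVE_PATTERNS):
            return "human-review"
    return "auto-merge"

route_pull_request(["docs/readme.md", "ui/button.tsx"])  # low-risk path
route_pull_request(["payments/charge.ts"])               # flagged for humans
```

The point of the sketch is the attention split: the cheap rule handles the bulk, and humans only see the changes that can actually hurt you.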

Meta, meanwhile, described a different approach to making coding agents useful inside large, messy, real-world codebases. The company built a pre-computed set of internal ‘context guides’—essentially a curated map of how systems fit together—so agents don’t get lost or invent rules that aren’t true. This points to a practical lesson: for many organizations, the path to better AI output isn’t only a smarter model; it’s better, verified context about how the company actually builds software.
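In spirit, a context guide is just curated, verified text injected ahead of the agent's task. Meta hasn't published its format, so the schema and example guides below are assumptions, but the mechanism can be sketched like this:

```python
# Illustrative 'context guides': a curated map of how internal systems
# fit together. The schema and contents here are assumed for the
# sketch, not Meta's actual format.
CONTEXT_GUIDES = {
    "checkout": {
        "summary": "Checkout calls the payments service over gRPC; all "
                   "writes go through the orders DB, never directly.",
        "owners": ["commerce-platform"],
    },
    "search": {
        "summary": "Search indexes rebuild nightly; do not assume "
                   "real-time consistency.",
        "owners": ["search-infra"],
    },
}

def build_agent_preamble(touched_systems: list[str]) -> str:
    """Assemble verified system notes into one prompt preamble, so the
    agent works from documented facts instead of guessing."""
    sections = []
    for name in touched_systems:
        guide = CONTEXT_GUIDES.get(name)
        if guide is None:
            continue  # no guide: better to omit than to invent rules
        sections.append(f"## {name}\n{guide['summary']}\n"
                        f"Owners: {', '.join(guide['owners'])}")
    return "\n\n".join(sections)

preamble = build_agent_preamble(["checkout", "billing"])
```

Note the fallback: when no guide exists, the sketch includes nothing rather than letting the model improvise, which is the whole point of pre-computing the map.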

On the consumer side, Google released an official iPhone app that lets people run its Gemma models locally on-device. That’s notable because, until recently, ‘local AI’ on phones often meant unofficial demos or third-party wrappers. Running models on the device can be appealing for privacy, latency, and offline use—even if the experience still comes with trade-offs, like large model downloads and pared-back session features. It’s another sign that AI isn’t only racing forward in cloud data centers; it’s also being squeezed down into personal hardware where it can be tried, tested, and trusted differently.

Security watch: cryptography engineer Filippo Valsorda is urging faster adoption of post-quantum cryptography, arguing that estimates for breaking today’s widely used elliptic-curve systems are tightening in ways that make ‘sometime after 2030’ feel less safe as a planning assumption. The immediate message for organizations is simple: if you’re deploying new systems meant to last—especially anything tied to identity, certificates, or long-lived secrets—this is the moment to prioritize post-quantum migration plans, not just put them in a research bucket.
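A common first step in that migration is hybrid key exchange: derive the session key from both a classical secret and a post-quantum one, so an attacker has to break both. The sketch below uses random stubs in place of real KEMs (in practice you'd pair something like X25519 with ML-KEM from FIPS 203), and the label string and function names are illustrative:

```python
import hashlib
import os

def stub_kem_shared_secret() -> bytes:
    """Stand-in for a real key-encapsulation step. In production this
    would come from an actual KEM, e.g. X25519 for the classical half
    and ML-KEM for the post-quantum half; the random bytes here only
    let us demonstrate the combining step."""
    return os.urandom(32)

def hybrid_shared_secret(ss_classical: bytes, ss_postquantum: bytes) -> bytes:
    """Bind both secrets into one session key. If EITHER input stays
    secret, the derived key stays secret too."""
    return hashlib.sha256(
        b"hybrid-kem-v1" + ss_classical + ss_postquantum  # domain label is illustrative
    ).digest()

ss_c = stub_kem_shared_secret()
ss_pq = stub_kem_shared_secret()
session_key = hybrid_shared_secret(ss_c, ss_pq)  # 32-byte session key
```

The design point is that hybrids let you deploy post-quantum protection now without betting everything on the newer algorithms, which is why they're a popular bridge for long-lived secrets.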

In courts and policy, a major jury verdict in Los Angeles found that Meta and YouTube harmed a young user through addictive product design, in a case tied to body image issues and severe mental-health outcomes. Commentators are calling it a ‘Big Tobacco’ style moment for social platforms. Whatever happens on appeal, the wider impact could be political: more pressure for age-gating, stronger duty-of-care rules, and tighter scrutiny of engagement mechanics that keep teens scrolling without necessarily making them feel better.

Across the Atlantic, the European Commission proposed a fast-track defence innovation program called AGILE. The goal is to get emerging capabilities—like AI-enabled decision tools, autonomous systems, and quantum tech—tested and fielded far faster than traditional procurement cycles allow. It’s a response to a harsh reality: in modern conflict, iteration can happen in weeks, while bureaucracy can burn years. If AGILE is approved, it could change how startups and smaller suppliers participate in European defence, and reduce reliance on non-European systems for key capabilities.

Robotics check-in: startup Generalist announced a new ‘physical AI’ model it claims can perform a wide range of dexterous, hand-like tasks with production-grade reliability. The big claim here is adaptability—recovering from mistakes and handling disruptions—rather than just repeating a scripted motion. If results like these hold up outside controlled demos, it’s a meaningful step toward robots that can do economically useful work in warehouses, factories, and servicing environments without months of custom programming.

And for a bit of hardware archaeology: a teardown video surfaced of LG’s unreleased ‘Rollable’ phone prototype, a device that was teased years ago before LG exited the phone business. The internal design looks clever—and also a little terrifying from a durability standpoint—with motors, tracks, and moving supports needed to unfurl extra OLED screen area. It’s a good reminder that many ‘future phone’ concepts fail not because they’re impossible, but because they’re too complex, too fragile, or too expensive to manufacture at scale.

Finally, media economics: Nate Silver argues that social media has shifted from being a dominant traffic engine for publishers to something far more marginal—especially compared to direct relationships like email subscriptions. Even if that’s good news for outlet stability, there’s a sting in the tail: platforms may matter less for clicks, but they can still shape what people believe by rewarding outrage, pile-ons, and partisan content. The business incentives may be changing—but the cultural incentives are still very much in play.

That’s the tech landscape for April 7th, 2026: humans pushing back toward the Moon, AI firms battling over power and chips, and governments and courts trying to catch up to the consequences. If you’re building with AI right now, the theme of the day is constraints—compute, context, and control. I’m TrendTeller. Thanks for listening to The Automated Daily, tech news edition. Check back tomorrow for the next sweep of what matters.