Transcript

SpaceX bets on orbital power & OpenAI goes enterprise via JV - Tech News (May 5, 2026)

May 5, 2026


SpaceX is quietly laying groundwork for something that sounds like science fiction: energy hardware designed for compute in orbit—because in space, power is the real bottleneck. Welcome to The Automated Daily, tech news edition, the podcast created by generative AI. It’s May 5th, 2026, and I’m TrendTeller. Here’s what’s moving across AI, platforms, developer tools, and the infrastructure that’s starting to look a lot like the next tech arms race.

Let’s start with space and energy—because it’s becoming clear that “AI infrastructure” doesn’t stop at the edge of Earth. SpaceX has begun rapid construction of a new facility in Bastrop, Texas, aimed at producing advanced solar cells for space use. This isn’t about rooftop panels—it’s about squeezing as much reliable power as possible out of every gram you launch. The interesting angle is what it’s for: SpaceX is signaling it wants enough on-orbit energy to support far more than communications satellites, with talk tied to future large-scale compute in space. If this goes anywhere, it’s a reminder that the next data-center debate may include not just where to place servers—but where to place power generation.

Staying with unconventional compute infrastructure: Panthalassa, an Oregon-based company, just raised a large new round to push ahead with wave-powered “data centers at sea.” The pitch is straightforward: use ocean energy for electricity, use ocean water for cooling, and send results back via satellites rather than relying on expensive seafloor cabling. Even if the timeline is ambitious, the story fits a broader theme—AI is pressuring the grid, and investors are now willing to bankroll ideas that blend energy generation and compute as a single product.

Now to enterprise AI—where the big shift isn’t clever demos anymore, it’s distribution. OpenAI has finalized a joint venture called The Deployment Company to help businesses implement and scale OpenAI software. The notable part is who’s backing it: major private equity firms, and a structure that effectively comes with a built-in customer channel through large portfolios of companies. This is OpenAI leaning into a classic enterprise play—make adoption easier, turn pilots into rollouts, and lock in recurring revenue before the market gets crowded. It also underlines how AI competition is moving beyond models. The next battles are about sales motion, compliance comfort, and who can turn interest into production systems fastest.

That brings us neatly to funding—and the strange new math of AI valuations. One venture investor argues that billion-dollar-plus “seed” valuations are no longer rare in AI, pointing to dozens of first rounds with eye-watering headline prices. The rationale is that outcomes might be enormous, and early success can depend more on access to scarce compute and top researchers than on gradual revenue milestones. But the same post also lists the ways these bets can go sideways: momentum loss, talent churn, research that never becomes a business, or future capital needs that dilute early investors. The punchline is venture logic in its purest form—many disappointments are acceptable if one outlier becomes a category-defining winner.

On the “mega-round” front, AI startup Sierra has raised a huge new funding round at a much higher valuation than just months ago. Sierra builds AI customer service agents for enterprises and says it’s scaling quickly. What’s interesting here isn’t just the number—it’s the signal. Investors are still paying up for companies that can plausibly become the default vendor inside large organizations, especially in areas like customer support where the return on automation can be immediate and measurable.

Switching to online video: YouTube is testing an AI-powered tool to help creators handle one of the most persistent annoyances on the platform—copyright claims. If a video gets flagged by Content ID, creators can now see a “Create” option that generates several royalty-free instrumental replacements, designed to slot in where the disputed audio was. The practical benefit is obvious: fewer videos get stuck muted or forced into awkward edits. The bigger question is what this does to the ecosystem around creator music. If YouTube can generate acceptable background tracks at scale, it could pressure the market for third-party royalty-free libraries—especially for creators who just need something safe and serviceable.

Now to regulation and platform design: Meta is heading into a major second phase of a New Mexico case focused on child safety. Prosecutors are asking a judge for significant restrictions on Meta’s apps and, crucially, on recommendation systems—the features that decide what content gets shown and how long users stay engaged. After earlier proceedings resulted in major civil penalties, this phase is about whether Meta’s products can be treated as a public nuisance under state law. If the court orders meaningful changes, it could set a template other states try to replicate. And Meta is expected to push back hard, arguing that algorithm restrictions collide with free-speech protections. However it lands, it’s a high-stakes test of how far governments can go in regulating engagement-driven design.

Over in defense tech, the U.S. Department of Defense says it plans to integrate advanced AI capabilities into highly sensitive classified cloud environments, with support from a long list of major tech providers. The headline is simple: AI is moving from experiments into places where decisions are high consequence. The Pentagon is framing this as acceleration—using AI for things like sorting intelligence, running simulations, and assisting planning. Why it matters is equally simple: once AI becomes routine in classified workflows, the debate shifts from “should we use it?” to “how do we control it, audit it, and keep humans meaningfully accountable?”

Here’s a smaller consumer item with outsized day-to-day impact: Apple is reportedly planning an iOS 27 feature that lets users create their own Apple Wallet passes—even for tickets or cards that don’t officially support Wallet. The idea is that you could scan a QR code to generate a pass, or manually build one if needed. It’s not flashy, but it’s a classic Apple move: reduce friction, reduce reliance on random third-party apps, and make Wallet the default home for the clutter of modern life—events, memberships, and gift cards included.

Let’s talk developer productivity—where a “boring” tool can have enormous leverage. Stripe published a behind-the-scenes look at how it developed and rolled out rubyfmt, an autoformatter for Ruby, across what it describes as an enormous internal codebase. The takeaway isn’t just that formatting can be automated—it’s that standardization removes a constant stream of tiny debates in code reviews. That saves time, reduces inconsistency, and makes it easier for engineers to jump between projects without re-learning style rules. This is the kind of engineering work that rarely gets headlines, but quietly changes how fast large teams can ship.
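The core guarantee behind a tool like rubyfmt is that it changes layout, never behavior, which is why teams can adopt it wholesale and stop debating style in review. A minimal sketch of that idea (in Python purely for illustration; the principle is the same one rubyfmt applies to Ruby):

```python
# Two stylistic variants of the same function. An autoformatter rewrites
# both to one canonical layout without changing what they compute, so
# code review never needs to argue about which spelling is "right".
def total_compact(items): return sum(i["price"] for i in items)

def total_spread(items):
    return sum(
        i["price"]
        for i in items
    )

cart = [{"price": 3}, {"price": 4}]
# Different formatting, identical behavior.
assert total_compact(cart) == total_spread(cart) == 7
print("formatting differs, behavior identical")
```

Because the rewrite is semantics-preserving, it can be applied mechanically across an enormous codebase in one pass—which is exactly what makes the rollout story interesting at Stripe's scale.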

On the database front, Redis’s creator says a new native Array data type has finally landed as a proposal after months of design and stress testing. The goal is to support true indexed arrays efficiently, without wasting memory when indexes get huge. The meta-story here is also worth noting: he describes using AI tools extensively to challenge design decisions, generate code, and expand tests—while still doing the core system thinking and careful review by hand. It’s a realistic snapshot of how AI is changing serious software work: less drudgery, more iteration, and—if you’re disciplined—better testing.
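The stated design goal—true indexed access without allocating a slot for every index up to the largest one—is essentially a sparse array. Here's a minimal sketch of that idea in Python; this is an illustration of the goal as described, not Redis's actual data structure or command set:

```python
class SparseArray:
    """Sketch of the design goal behind the proposed Redis Array type:
    constant-time indexed reads and writes, with memory proportional to
    the number of populated slots rather than the largest index used."""

    def __init__(self):
        self._slots = {}  # index -> value, only for populated entries

    def set(self, index, value):
        self._slots[index] = value

    def get(self, index, default=None):
        return self._slots.get(index, default)

    def populated(self):
        return len(self._slots)


arr = SparseArray()
arr.set(0, "a")
arr.set(1_000_000, "b")
# Writing at index 1,000,000 cost one slot, not a million.
print(arr.get(1_000_000))  # b
print(arr.populated())     # 2
```

A production implementation would need more care (dense-vs-sparse representation switching, ordered iteration), but the sketch shows why "huge index" and "huge memory" don't have to go together.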

In sports and analytics, Formula One is turning into a public showroom for enterprise AI. Teams have been signing new AI-focused partnerships at a rapid clip, and the motivation is the coming regulation overhaul in 2026. When physical testing is limited and budgets are constrained, simulation and faster iteration become competitive weapons. Race weekends already involve massive scenario exploration to identify strategy options quickly. The open question going forward is governance: if AI partners contribute tools, people, or compute capacity, does that become a loophole around spending limits—or does it eventually normalize into table stakes for everyone?

Now to biology—with a piece of research that reads like a speed boost for industrial biotech. Researchers at the National University of Singapore published work on a platform called LySE, designed to rapidly evolve bacteria to perform complex biochemical tasks—like metabolizing a chemical tied to PET plastics. The key idea is faster, more controllable evolution focused on a targeted set of genes, rather than random changes everywhere. Why it’s interesting is what it unlocks: quicker paths to microbes that can help with plastic upcycling, pollutant breakdown, and other forms of biomanufacturing where progress is often bottlenecked by slow iteration.

Two final perspective pieces to round out the day—starting with jobs. Bank of America’s research team is pushing back on the idea of an AI jobs apocalypse. Their argument is historical: economies adapt, job categories evolve, and “exposed” doesn’t automatically mean “eliminated.” At the same time, they warn that advanced agent-style AI could change the equation, and that policy may need to catch up so the gains don’t concentrate entirely among capital owners.

And in the AI-for-biology world, one essay argues the biggest problem isn’t that models are weak—it’s that biology itself doesn’t have clean, modular handoffs the way software does. If your early assumptions are wrong, downstream results can look fine until they fail expensively. The near-term opportunity, the author suggests, may be less about magic drug design and more about unglamorous wins in clinical operations—where saving time has immediate value.

That’s the tech landscape for May 5th, 2026—AI pushing into enterprise and defense, platforms trying to automate their own friction points, and infrastructure experiments stretching from oceans to orbit. If you want one theme to keep in mind today, it’s this: the next stage of AI competition is increasingly about power, distribution, and governance—not just better models. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller—see you tomorrow.