Transcript

Musk and OpenAI trial drama & Anthropic’s compute spree accelerates - Tech News (May 7, 2026)

May 7, 2026


He reportedly tied an AI power struggle to an eighty-billion-dollar plan for a self-sustaining city on Mars—and it’s now being argued in court. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is May 7th, 2026. Let’s get into what happened in tech, and why it matters.

We’ll start with the most headline-grabbing courtroom story in AI right now. OpenAI President Greg Brockman testified in a California trial that Elon Musk supported the idea of turning OpenAI into a for-profit back in 2017—arguing that a pure nonprofit couldn’t raise the huge sums needed to build advanced AI systems. But Brockman also said the relationship soured when Musk pushed for control if the organization restructured, including what Brockman described as demands for a majority stake. The testimony paints a picture of a tense 2017 meeting where Musk rejected an equity proposal, left abruptly, and threatened to withhold funding until governance terms were settled. All of this matters because Musk is now suing OpenAI, saying it betrayed its original mission. OpenAI, in turn, argues Musk’s motives are more about regret and rivalry. The outcome could influence how OpenAI is governed at a moment when it’s planning extremely large computing investments.

Staying with the AI power race—compute is turning into the currency that decides who can scale and who can’t. Anthropic says its growth is far outpacing expectations, and that demand for Claude and Claude Code is straining its ability to secure enough chips. To close that gap, Anthropic announced a deal to use the entire capacity of a massive data center in Memphis known as Colossus 1, run by xAI. The companies are describing it as a rare “all-in” capacity arrangement, and they’re even floating future collaboration around space-based computing concepts. Whether or not that orbital idea goes anywhere soon, the near-term signal is clear: top AI labs are now negotiating like heavy industry—locking in supply at enormous scale because waiting in line is no longer an option.

And the compute land-grab isn’t just happening through flashy partnerships. One report says Anthropic has also agreed to pay Google an extraordinary amount over several years for cloud capacity and AI chips. Even if the exact figures are debated, the trend is unmistakable: long-term, mega-sized infrastructure commitments are becoming normal, and they’re creating huge revenue backlogs for cloud providers. The bigger question is what this does to the market. These deals can stabilize supply for a few winners, but they also raise the bar for newcomers and intensify concerns about energy use and sustainability as data centers keep expanding.

That infrastructure story is also feeding directly into markets. Alphabet is closing in on Nvidia’s position as the world’s most valuable company, fueled by investor confidence in its AI strategy and rapid Google Cloud growth. The interesting angle here is what the market is rewarding. Nvidia remains central to AI hardware demand, but Alphabet is being valued not just as a tech platform, but as a company that can monetize AI through cloud services—and increasingly through its own chips as well. If Alphabet retakes the top spot, it signals a shift: investors may be favoring companies that can turn AI into recurring enterprise revenue, not just the ones supplying the picks and shovels.

On the regulation front, the US Department of Commerce is expanding its role as a kind of clearinghouse for safety testing. Google, Microsoft, and Elon Musk’s xAI have agreed to voluntarily submit new AI models for pre-release evaluations run through the Commerce Department’s Center for AI Standards and Innovation. The key point is the direction of travel. Even with a political climate that often prefers lighter-touch rules, there’s mounting pressure—especially around national security and high-stakes misuse—to treat frontier model releases more like major industrial deployments: tested, audited, and documented before the public gets access.

Now to a threat that’s getting more immediate by the month: deepfakes. Experts are warning that realistic AI-generated audio and video are becoming so easy to produce that laws and everyday defenses can’t keep up. One of the stark examples still being cited is a robocall that sounded like President Joe Biden, aimed at voters ahead of New Hampshire’s 2024 primary. And it’s not just politics—schools and communities are dealing with deepfake incidents involving students and harassment. Researchers also say the next phase is already here: deepfakes showing up live on video calls, where people have historically relied on face-to-face cues. The practical takeaway is boring but necessary: verify through trusted channels, and for personal situations, lean on offline authentication—like pre-agreed family passwords—because your eyes and ears alone are becoming unreliable.

Apple’s App Store rules are colliding head-on with AI coding tools, and Replit is one of the highest-profile casualties. Its iOS app reportedly hasn’t been able to ship updates since January, after Apple flagged AI coding apps under rules that restrict executing code that changes an app’s functionality. The bigger issue isn’t one company’s update queue—it’s that AI-generated software breaks the old assumption that the “reviewed app” is the same artifact users run. If an app becomes a wrapper that can generate unlimited new behavior at runtime, the traditional model for review, versioning, and accountability starts to crumble. Developers are now openly challenging Apple’s interpretation, and this looks increasingly like a policy fight that could end up in court—and could force app stores to define what “software distribution” means in an era of dynamic, model-assembled apps.

Browser politics are heating up, too. A web developer’s critique of Google’s newly shipped Chrome Prompt API is gaining traction. The argument is that the API looks less like a vendor-neutral web capability and more like a standardized doorway into Google’s own on-device model, Gemini Nano. Critics say the proposal faced serious objections from other browser makers and standards bodies, yet still shipped. They’re also raising red flags about consent and privacy—like the risk that websites could prompt on-device AI in the background, consuming local resources and potentially creating new fingerprinting or abuse scenarios. This is one to watch because it’s a classic web-platform tension: when one browser vendor can ship first, it can effectively define the “standard” by momentum rather than agreement.

Zooming out from specific platforms, there’s a strategic worry building in the AI ecosystem: the quiet weakening of open-weights model availability. One analysis argues that as major labs tighten access and licensing, the market loses an important counterweight. Open-weights models matter not just for hobbyists—they support on-prem deployments for privacy and compliance, they enable customization, and they create price pressure on API-only providers. If top-tier open options fade, the fear is a more concentrated market where a small number of players can set terms and prices more freely. Even if smaller “good enough” models improve, many techniques still depend on access to strong base models—exactly what may be getting harder to obtain.

On AI and copyright, a major class-action lawsuit in Manhattan is taking aim at Meta over its Llama models. Author Scott Turow and several big publishers allege Meta used pirated sources—like Library Genesis and Anna’s Archive—to copy and train on millions of books and journal articles without licensing. Meta denies wrongdoing and is expected to argue that training can be fair use. But the piracy allegation is the part that could reshape the case. Courts may be more willing to entertain “transformative” arguments when data is obtained lawfully—but much less forgiving if the pipeline runs through unauthorized copying at scale. The outcome could affect not just Meta, but how every AI lab documents data provenance going forward.

Now for research that shows AI helping in a more grounded, practical way. Researchers at EPFL introduced a framework called Synthegy that lets chemists guide molecule synthesis planning using plain-language instructions—things like preferring certain strategies or avoiding particular detours. What’s interesting here is the role shift for large language models. Instead of just generating text or structures, the model is being used to interpret intent and help rank options in a domain where there are often too many plausible routes. In evaluations, chemists frequently agreed with the system’s rankings. If that holds up broadly, it could reduce trial-and-error in drug discovery and materials work, especially by making advanced planning tools easier to steer.

In biotech, a CRISPR-related result is drawing attention for its potential as a programmable “kill switch.” Researchers report that an RNA-guided nuclease called Cas12a2 can be set to destroy cells only when a chosen RNA transcript is present. In lab tests, it selectively eliminated cells expressing targets like HPV transcripts, and it was also designed to distinguish a single-letter cancer mutation, showing additive effects with a targeted drug. It’s early, and delivery and safety are still the hard parts, but the big idea is compelling: targeting not just a DNA sequence, but a cell state—defined by what RNA it’s actively making.

For developers, a couple of database stories stood out today—one about what’s new, and one about what’s everywhere. First, the creator of Redis has proposed adding a native array data type, plus commands to query and scan arrays, including a server-side grep-like search over array values. Separately, SQLite’s maintainers are making a bold claim: SQLite may be deployed more than all other database engines combined, with an almost unfathomable number of database files living across phones, apps, and devices. Whether you buy the exact figure or not, the broader point stands: some of the most important software in the world is quiet infrastructure, and its reliability and security ripple through daily life.
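To make the ubiquity claim concrete: SQLite is an embedded library, not a server, so a “database” is just a local file that any app can create with zero configuration. This is a minimal illustration using Python’s standard-library `sqlite3` module; the table and values are made up for the example.

```python
import sqlite3

# SQLite runs in-process: no server, no setup. A database is just a
# local file (or, as here, an in-memory store), which is why copies of
# it ship embedded inside phones, browsers, and apps by the billions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE launches (vehicle TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO launches VALUES (?, ?)",
    [("Falcon 9", 2026), ("Starship", 2026)],
)
rows = conn.execute(
    "SELECT vehicle FROM launches ORDER BY vehicle"
).fetchall()
print([v for (v,) in rows])  # ['Falcon 9', 'Starship']
conn.close()
```

The same three lines of setup work identically whether the target is `:memory:` or a file path, which is exactly the low-friction deployment model behind SQLite’s scale.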

In robotics and industrial strategy, Morgan Stanley argues China could be setting itself up for an electric-vehicle-style advantage in humanoid robots. The idea is that early scale, government procurement, and supply-chain depth could let Chinese firms iterate quickly and flood the market with lower-cost machines as deployments move from demos into factories, universities, and tech parks. The risk, as always, is that a fast buildout can also create gluts and price collapses. But even that downside can accelerate global automation—meaning the competitive impact may be felt worldwide regardless of which companies end up with the best margins.

Finally, two SpaceX stories that both point to consolidation of power—one corporate, one operational. First, a report on SpaceX’s confidential IPO plans suggests a structure that would give Elon Musk sweeping voting control while sharply limiting shareholder rights, including mandatory arbitration and restrictions on class actions. Second, SpaceX’s launch cadence is expected to shift. Falcon 9 is projected to see a gradual decline in launches this year—not due to problems, but because resources are moving toward Starship. Florida infrastructure is being repurposed accordingly, and more Starlink launches are shifting to California. The underlying theme is the same: SpaceX is reorganizing around the next platform, and that transition will reshape both the business and America’s launch operations.

That’s the tech landscape for May 7th, 2026—courtroom battles shaping AI governance, compute deals rewriting the competitive map, and platforms struggling to keep up with software that changes at runtime. If you follow one thread today, make it this: control of infrastructure and distribution is becoming just as important as model quality. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller—see you tomorrow.