Tech News · May 7, 2026 · 11:53

Musk and OpenAI trial drama & Anthropic’s compute spree accelerates - Tech News (May 7, 2026)

Musk’s OpenAI courtroom twists, Anthropic’s massive compute deals, Apple vs AI coding apps, deepfakes, Meta’s Llama lawsuit, and SpaceX’s IPO control plan.

Today's Tech News Topics

  1. Musk and OpenAI trial drama

    — OpenAI President Greg Brockman testified that Elon Musk once backed a for-profit shift, then pushed for control—now central to Musk’s lawsuit over OpenAI’s mission and governance.
  2. Anthropic’s compute spree accelerates

    — Anthropic is locking in enormous GPU and cloud capacity, including a full-cluster deal for xAI’s Colossus 1, highlighting how compute access is becoming the top competitive constraint in AI.
  3. Alphabet vs Nvidia market lead

    — Alphabet is nearing Nvidia in market value as Google Cloud growth and custom AI chips reshape investor expectations around who captures the biggest AI profits.
  4. US pre-release AI safety tests

    — Google, Microsoft, and xAI will voluntarily submit new models to the US Commerce Department’s CAISI for testing, signaling a firmer federal role in AI risk evaluation.
  5. Deepfakes escalate political risk

    — Deepfakes are getting real-time and harder to detect, raising election and everyday impersonation threats while enforcement and public defenses lag behind.
  6. Apple blocks AI coding apps

    — Apple’s App Store rules are colliding with dynamic, AI-generated software, leaving Replit and similar coding apps unable to ship updates and forcing a rethink of what “reviewed software” means.
  7. Chrome Prompt API backlash

    — Google’s Chrome Prompt API is drawing criticism for looking like a Gemini Nano interface packaged as a web standard, with concerns about user consent, privacy, and browser power.
  8. Open-weights models quietly retreat

    — Analysts warn that reduced releases of open-weights models could weaken competition, increase AI pricing power, and concentrate control among a small set of frontier labs and cloud giants.
  9. Meta sued over Llama training

    — A new class-action lawsuit from major publishers and author Scott Turow accuses Meta of using pirated books and journals to train Llama, testing how courts treat “fair use” versus copying from piracy.
  10. AI tools advance chemistry planning

    — EPFL’s Synthegy uses plain-language guidance to steer synthesis planning and reaction reasoning, suggesting LLMs can help scientists choose plausible routes—not just generate structures.
  11. RNA-triggered CRISPR kill switch

    — Researchers demonstrated Cas12a2 as an RNA-triggered, programmable cell-killing system that can target cancer mutations or viral transcripts, opening new doors for selective cell removal if safety and delivery hold up.
  12. Redis arrays and SQLite scale

    — Redis may gain a native array type with grep-like server-side searching, while SQLite’s maintainers argue it may be deployed more than all other databases combined—underscoring how foundational these tools are.
  13. China’s humanoid robot advantage

    — Morgan Stanley says China’s early lead in humanoid robots could boost manufacturing share and exports, echoing the country’s electric-vehicle playbook and raising new geopolitical competition.
  14. SpaceX governance and Starship shift

    — Reports say SpaceX’s IPO structure could heavily restrict shareholder rights, while operationally the company is preparing for fewer Falcon 9 launches as it shifts effort toward Starship.
  15. Google Search adds community context

    — Google is tweaking AI Overviews and AI Mode to show more community perspectives, clearer sourcing, and improved link visibility—an attempt to balance generative answers with trust and click-through.

Full Episode Transcript: Musk and OpenAI trial drama & Anthropic’s compute spree accelerates

He reportedly tied an AI power struggle to an eighty-billion-dollar plan for a self-sustaining city on Mars—and it’s now being argued in court. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is May 7th, 2026. Let’s get into what happened in tech, and why it matters.

Musk and OpenAI trial drama

We’ll start with the most headline-grabbing courtroom story in AI right now. OpenAI President Greg Brockman testified in a California trial that Elon Musk supported the idea of turning OpenAI into a for-profit back in 2017—arguing that a pure nonprofit couldn’t raise the huge sums needed to build advanced AI systems. But Brockman also said the relationship soured when Musk pushed for control if the organization restructured, including what Brockman described as demands for a majority stake. The testimony paints a picture of a tense 2017 meeting where Musk rejected an equity proposal, left abruptly, and threatened to withhold funding until governance terms were settled. All of this matters because Musk is now suing OpenAI, saying it betrayed its original mission. OpenAI, in turn, argues Musk’s motives are more about regret and rivalry. The outcome could influence how OpenAI is governed at a moment when it’s planning extremely large computing investments.

Anthropic’s compute spree accelerates

Staying with the AI power race—compute is turning into the currency that decides who can scale and who can’t. Anthropic says its growth is far outpacing expectations, and that demand for Claude and Claude Code is straining its ability to secure enough chips. To close that gap, Anthropic announced a deal to use the entire capacity of a massive data center in Memphis known as Colossus 1, run by xAI. The companies are describing it as a rare “all-in” capacity arrangement, and they’re even floating future collaboration around space-based computing concepts. Whether or not that orbital idea goes anywhere soon, the near-term signal is clear: top AI labs are now negotiating like heavy industry—locking in supply at enormous scale because waiting in line is no longer an option.

And the compute land-grab isn’t just happening through flashy partnerships. One report says Anthropic has also agreed to pay Google an extraordinary amount over several years for cloud capacity and AI chips. Even if the exact figures are debated, the trend is unmistakable: long-term, mega-sized infrastructure commitments are becoming normal, and they’re creating huge revenue backlogs for cloud providers. The bigger question is what this does to the market. These deals can stabilize supply for a few winners, but they also raise the bar for newcomers and intensify concerns about energy use and sustainability as data centers keep expanding.

Alphabet vs Nvidia market lead

That infrastructure story is also feeding directly into markets. Alphabet is closing in on Nvidia’s position as the world’s most valuable company, fueled by investor confidence in its AI strategy and rapid Google Cloud growth. The interesting angle here is what the market is rewarding. Nvidia remains central to AI hardware demand, but Alphabet is being valued not just as a tech platform, but as a company that can monetize AI through cloud services—and increasingly through its own chips as well. If Alphabet retakes the top spot, it signals a shift: investors may be favoring companies that can turn AI into recurring enterprise revenue, not just the ones supplying the picks and shovels.

US pre-release AI safety tests

On the regulation front, the US Department of Commerce is expanding its role as a kind of clearinghouse for safety testing. Google, Microsoft, and Elon Musk’s xAI have agreed to voluntarily submit new AI models for pre-release evaluations run through the Commerce Department’s Center for AI Standards and Innovation. The key point is the direction of travel. Even with a political climate that often prefers lighter-touch rules, there’s mounting pressure—especially around national security and high-stakes misuse—to treat frontier model releases more like major industrial deployments: tested, audited, and documented before the public gets access.

Deepfakes escalate political risk

Now to a threat that’s getting more immediate by the month: deepfakes. Experts are warning that realistic AI-generated audio and video are becoming so easy to produce that laws and everyday defenses can’t keep up. One of the stark examples still being cited is a robocall that sounded like President Joe Biden, aimed at voters ahead of New Hampshire’s 2024 primary. And it’s not just politics—schools and communities are dealing with deepfake incidents involving students and harassment. Researchers also say the next phase is already here: deepfakes showing up live on video calls, where people have historically relied on face-to-face cues. The practical takeaway is boring but necessary: verify through trusted channels, and for personal situations, lean on offline authentication—like pre-agreed family passwords—because your eyes and ears alone are becoming unreliable.
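The offline-authentication idea above can be made concrete. A pre-agreed family password is, in effect, a shared secret, and the robust way to use one is challenge-response: the secret itself is never spoken on the possibly faked call. This is a minimal, hypothetical sketch of that principle—not a product or a recommendation of any specific tool:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    # The person verifying sends a fresh random challenge, so a
    # recorded answer from an earlier call cannot be replayed.
    return secrets.token_hex(8)

def respond(shared_secret: str, challenge: str) -> str:
    # Both parties derive the response from the pre-agreed secret;
    # the secret itself never travels over the suspect channel.
    digest = hmac.new(shared_secret.encode(), challenge.encode(),
                      hashlib.sha256).hexdigest()
    return digest[:8]

def verify(shared_secret: str, challenge: str, answer: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(respond(shared_secret, challenge), answer)

# Example: a caller claiming to be a family member proves it.
secret = "pre-agreed family password"
challenge = make_challenge()
answer = respond(secret, challenge)
assert verify(secret, challenge, answer)        # genuine caller
assert not verify(secret, challenge, "00000000")  # impostor guessing
```

In practice, of course, a family would just exchange a spoken question and answer—the point of the sketch is that the check relies on something only the real person knows, not on how they look or sound.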

Apple blocks AI coding apps

Apple’s App Store rules are colliding head-on with AI coding tools, and Replit is one of the highest-profile casualties. Its iOS app reportedly hasn’t been able to ship updates since January, after Apple flagged AI coding apps under rules that restrict executing code that changes an app’s functionality. The bigger issue isn’t one company’s update queue—it’s that AI-generated software breaks the old assumption that the “reviewed app” is the same artifact users run. If an app becomes a wrapper that can generate unlimited new behavior at runtime, the traditional model for review, versioning, and accountability starts to crumble. Developers are now openly challenging Apple’s interpretation, and this looks increasingly like a policy fight that could end up in court—and could force app stores to define what “software distribution” means in an era of dynamic, model-assembled apps.

Chrome Prompt API backlash

Browser politics are heating up, too. A web developer critique is gaining traction over Google’s newly shipped Chrome Prompt API. The argument is that it looks less like a vendor-neutral web capability and more like a standardized doorway into Google’s own on-device model, Gemini Nano. Critics say the proposal faced serious objections from other browser makers and standards bodies, yet still shipped. They’re also raising red flags about consent and privacy—like the risk that websites could prompt on-device AI in the background, consuming local resources, and potentially creating new fingerprinting or abuse scenarios. This is one to watch because it’s a classic web-platform tension: when one browser vendor can ship first, it can effectively define the “standard” by momentum rather than agreement.

Open-weights models quietly retreat

Zooming out from specific platforms, there’s a strategic worry building in the AI ecosystem: the quiet weakening of open-weights model availability. One analysis argues that as major labs tighten access and licensing, the market loses an important counterweight. Open-weights models matter not just for hobbyists—they support on-prem deployments for privacy and compliance, they enable customization, and they create price pressure on API-only providers. If top-tier open options fade, the fear is a more concentrated market where a small number of players can set terms and prices more freely. Even if smaller “good enough” models improve, many techniques still depend on access to strong base models—exactly what may be getting harder to obtain.

Meta sued over Llama training

On AI and copyright, a major class-action lawsuit in Manhattan is taking aim at Meta over its Llama models. Author Scott Turow and several big publishers allege Meta used pirated sources—like Library Genesis and Anna’s Archive—to copy and train on millions of books and journal articles without licensing. Meta denies wrongdoing and is expected to argue that training can be fair use. But the piracy allegation is the part that could reshape the case. Courts may be more willing to entertain “transformative” arguments when data is obtained lawfully—but much less forgiving if the pipeline runs through unauthorized copying at scale. The outcome could affect not just Meta, but how every AI lab documents data provenance going forward.

AI tools advance chemistry planning

Now for research that shows AI helping in a more grounded, practical way. Researchers at EPFL introduced a framework called Synthegy that lets chemists guide molecule synthesis planning using plain-language instructions—things like preferring certain strategies or avoiding particular detours. What’s interesting here is the role shift for large language models. Instead of just generating text or structures, the model is being used to interpret intent and help rank options in a domain where there are often too many plausible routes. In evaluations, chemists frequently agreed with the system’s rankings. If that holds up broadly, it could reduce trial-and-error in drug discovery and materials work, especially by making advanced planning tools easier to steer.

RNA-triggered CRISPR kill switch

In biotech, a CRISPR-related result is drawing attention for its potential as a programmable “kill switch.” Researchers report that an RNA-guided nuclease called Cas12a2 can be set to destroy cells only when a chosen RNA transcript is present. In lab tests, it selectively eliminated cells expressing targets like HPV transcripts, and it was also designed to distinguish a single-letter cancer mutation, showing additive effects with a targeted drug. It’s early, and delivery and safety are still the hard parts, but the big idea is compelling: targeting not just a DNA sequence, but a cell state—defined by what RNA it’s actively making.

Redis arrays and SQLite scale

For developers, a couple of database stories stood out today—one about what’s new, and one about what’s everywhere. First, the creator of Redis has proposed adding a native array data type, plus commands to query and scan arrays, including a server-side grep-like search over array values. Separately, SQLite’s maintainers are making a bold claim: SQLite may be deployed more than all other database engines combined, with an almost unfathomable number of database files living across phones, apps, and devices. Whether you buy the exact figure or not, the larger point stands: some of the most important software in the world is quiet infrastructure, and its reliability and security ripple through daily life.
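What makes that deployment claim plausible is that SQLite is embedded, not served: the whole engine runs inside the host process and a database is just a file (or memory). A minimal sketch with Python’s standard-library sqlite3 module—the table and data here are made up for illustration:

```python
import sqlite3

# No server, no configuration: the engine lives in-process, which is
# why SQLite can hide inside phones, browsers, and desktop apps.
conn = sqlite3.connect(":memory:")  # or a file path like "app.db"
conn.execute("CREATE TABLE launches (vehicle TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO launches VALUES (?, ?)",
    [("Falcon 9", 2026), ("Starship", 2026)],
)
rows = conn.execute(
    "SELECT vehicle FROM launches WHERE year = ? ORDER BY vehicle",
    (2026,),
).fetchall()
print([v for (v,) in rows])  # -> ['Falcon 9', 'Starship']
conn.close()
```

An app that does this ships one extra file at most—no daemon to install or port to open—which is exactly the property that lets the deployment count grow with every device rather than every server.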

China’s humanoid robot advantage

In robotics and industrial strategy, Morgan Stanley argues China could be setting itself up for an electric-vehicle-style advantage in humanoid robots. The idea is that early scale, government procurement, and supply-chain depth could let Chinese firms iterate quickly and flood the market with lower-cost machines as deployments move from demos into factories, universities, and tech parks. The risk, as always, is that a fast buildout can also create gluts and price collapses. But even that downside can accelerate global automation—meaning the competitive impact may be felt worldwide regardless of which companies end up with the best margins.

SpaceX governance and Starship shift

Finally, two SpaceX stories that both point to consolidation of power—one corporate, one operational. First, a report on SpaceX’s confidential IPO plans suggests a structure that would give Elon Musk sweeping voting control while sharply limiting shareholder rights, including mandatory arbitration and restrictions on class actions. Second, SpaceX’s launch cadence is expected to shift. Falcon 9 is projected to see a gradual decline in launches this year—not due to problems, but because resources are moving toward Starship. Florida infrastructure is being repurposed accordingly, and more Starlink launches are shifting to California. The underlying theme is the same: SpaceX is reorganizing around the next platform, and that transition will reshape both the business and America’s launch operations.

Google Search adds community context

And one more before we close. Google is tweaking AI Overviews and AI Mode to show more community perspectives, clearer sourcing, and improved link visibility. It’s an attempt to balance generative answers with trust and click-through—keeping the people and sites behind those answers visible instead of burying them beneath a summary.

That’s the tech landscape for May 7th, 2026—courtroom battles shaping AI governance, compute deals rewriting the competitive map, and platforms struggling to keep up with software that changes at runtime. If you follow one thread today, make it this: control of infrastructure and distribution is becoming just as important as model quality. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller—see you tomorrow.
