The Automated Daily - Tech News Edition · February 28, 2026 · 11:24

OpenAI lands massive new funding & AI compute wars: TPUs vs GPUs - Tech News (Feb 28, 2026)

OpenAI’s $110B mega-round, Meta renting Google TPUs, AI “mind-reading” BCIs, neurons playing Doom, ASML High-NA EUV, and Anthropic vs Pentagon.



Topics

01. OpenAI lands massive new funding — OpenAI says it secured $110B in fresh funding led by Amazon, with Nvidia and SoftBank also committing, implying a $730B pre-money valuation; the company also touts surging ChatGPT usage.
02. AI compute wars: TPUs vs GPUs — Meta is reportedly renting Google TPUs while also buying from Nvidia and AMD, underscoring the AI infrastructure scramble and growing interest in non-Nvidia accelerators.
03. Pentagon pressures Anthropic over Claude — The U.S. Defense Department reportedly threatened to end Anthropic’s contract—and even floated Defense Production Act angles—unless Claude can be used with fewer restrictions for military needs.
04. Brain-computer interfaces decode inner speech — Stanford researchers used implanted microelectrodes plus AI to translate imagined speech into text, showing partial success on “inner speech” decoding and raising new ethics and rights questions.
05. Living neurons learn to play Doom — Cortical Labs trained human neurons on a chip to play Doom in about a week using a Python-based interface, a step toward more programmable “biological computing” systems.
06. ASML High-NA EUV reaches production — ASML says its $400M High-NA EUV lithography tools are ready for high-volume production, a key milestone for next-gen chip scaling needed by AI roadmaps.
07. Social media addiction heads to trial — A major U.S. lawsuit claims YouTube and Meta designed addictive features that harm youth mental health, with a lead plaintiff describing early depression, self-harm, and filter-driven body image issues.
08. IAEA blocked from bombed Iran sites — The IAEA says Iran has not granted access to facilities bombed in June, leaving the watchdog unable to verify enrichment status or fully account for uranium stockpiles amid ongoing diplomacy.
09. Faster toxic pollution testing with ML — Rice University researchers are combining nanoparticle-enhanced spectroscopy with machine learning to detect pollutants faster and potentially on-site, targeting Superfund-style contamination screening.


Full Transcript

A single AI company just claimed $110 billion in new backing—at a valuation that would have sounded absurdly futuristic not long ago. Who’s funding it, what they’re buying, and what it means for the rest of the industry is coming up. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is February 28th, 2026.

Let’s start with the biggest number on the board: OpenAI says it has secured $110 billion in fresh funding from Amazon, Nvidia, and SoftBank, putting the company at a reported $730 billion pre-money valuation. Amazon is described as leading the round with a $50 billion commitment—$15 billion upfront and another $35 billion later, contingent on conditions—while Nvidia and SoftBank are each pegged at $30 billion. OpenAI CEO Sam Altman also shared some eye-catching usage figures: more than 900 million weekly active users for ChatGPT, plus over 50 million consumer subscribers. The company’s pitch is straightforward—frontier AI is moving from research into everyday use, and the winners will be the ones who can scale infrastructure and turn it into products people actually rely on. A key detail here is the partnership structure. OpenAI says AWS becomes the exclusive third-party cloud distribution provider for OpenAI Frontier, and that an existing multiyear AWS deal is being expanded by another $100 billion over eight years. OpenAI also emphasized that its Microsoft relationship remains “strong and central.” So the theme is not a clean break from the past—more like a new, very expensive layer of alliances.
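For listeners keeping score, the reported figures are internally consistent. Here is a minimal sanity check of the arithmetic, assuming the standard convention that post-money valuation equals pre-money valuation plus the new capital raised:

```python
# Sanity-check the reported OpenAI round (all figures in billions of USD).
amazon = 15 + 35          # $15B upfront plus $35B contingent, per the report
nvidia = 30
softbank = 30

round_total = amazon + nvidia + softbank
assert round_total == 110  # matches the reported $110B round

pre_money = 730
post_money = pre_money + round_total
print(post_money)          # 840 -> implied post-money valuation of ~$840B
```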

That leads into the broader compute land grab, where no one wants to be dependent on a single chip supplier. Meta is reportedly signing a multi-billion-dollar, multi-year deal to rent Google’s AI chips—TPUs—to help train new models. Meta has also been linked to massive chip purchases elsewhere: AMD has talked about selling up to $60 billion worth of AI chips to Meta, and Meta has agreements for Nvidia hardware too. Meanwhile, Google has been pushing TPUs as a credible alternative to Nvidia’s GPUs, and TPU sales have become a visible part of Google Cloud’s growth story. If this all feels like musical chairs with accelerators, that’s because it is. Companies are spreading risk across GPU and non-GPU options, locking in supply, and experimenting with what actually performs best for their stacks. The practical takeaway: AI progress is increasingly limited not by ideas, but by the ability to secure and run enough compute—reliably, cheaply, and at scale.

Now to a collision between AI safety policy and national security procurement. According to the Associated Press, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: allow Claude to be used by the military without restrictions by a deadline, or risk losing its government contract. The report says defense officials also raised the possibility of labeling Anthropic a “supply chain risk,” and even floated invoking the Defense Production Act—though Pentagon messaging later suggested they may drop the DPA route. What’s notable is not just the pressure campaign, but the precedent question. The Defense Production Act has been used to prioritize manufacturing and logistics in crises—pandemic supplies, baby formula shortages, energy continuity—but forcing changes to an AI model’s safety limits, or overriding a company’s ethical guardrails, would be new territory and likely litigated. Anthropic’s CEO Dario Amodei has voiced concerns about uses like fully autonomous armed drones and AI-enabled mass surveillance, while the Pentagon says it’s not seeking mass surveillance and doesn’t want autonomous weapons without humans in the loop. In other words: everyone claims they want restraint; they disagree on how to enforce it—and who gets to set the boundaries.

Switching to a very different kind of AI boundary: brain decoding that’s starting to resemble “mind reading,” at least in narrow, carefully tested conditions. Stanford researchers worked with a 52-year-old stroke survivor—identified as participant T16—who had a microelectrode array implanted in the front of her brain. An AI system then translated neural activity associated with imagined speech into real-time text on a screen. The project, announced back in August 2025, also included three ALS patients. The interesting technical question wasn’t just whether the system could decode attempted speech—the kind you’d try to say with your mouth—but whether it could decode inner speech: words you “say” silently to yourself. The study suggests a tentative yes, with accuracy reaching up to 74% in a sentence-imagery task. But the limitations are just as important: for more spontaneous inner speech, performance fell off, and for open-ended prompts it could devolve into something close to gibberish. Researchers also used tasks like counting colored shapes to provoke internal number-words and found traces linked to those internal words in motor cortex activity. One implication is that inner speech and attempted speech may share strongly correlated patterns in motor cortex, but inner speech is simply weaker and harder to capture. Scientists are also looking beyond motor cortex—toward areas like the superior temporal gyrus—to potentially help patients whose motor regions are damaged. And as these systems inch forward, the ethical questions get louder: rights around thought decoding, consent, and potential misuse. The upside is profound—restoring communication for people who cannot speak. The downside is that once “decoding” exists, society has to decide what protections are non-negotiable.
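The study’s actual decoder isn’t reproduced in the reporting, but the general shape of such a pipeline, binning electrode activity into feature windows and classifying each window into speech tokens, can be sketched. Everything below is a hypothetical illustration: the channel count, the toy phoneme vocabulary, and the linear classifier are assumptions, not the Stanford system (published speech BCIs typically pair far more capable recurrent decoders with a language model).

```python
import numpy as np

# Hypothetical sketch of an inner-speech decoder: bin spike counts into
# feature vectors, then map each window to a phoneme-like token with a
# linear classifier. All dimensions, labels, and data are simulated and
# illustrative only.

rng = np.random.default_rng(0)

N_CHANNELS = 128                              # microelectrode channels (assumed)
VOCAB = ["AA", "B", "K", "S", "T", "<sil>"]   # toy phoneme set (assumed)

# Simulated training data: spike-count features and phoneme labels.
X_train = rng.poisson(lam=3.0, size=(1000, N_CHANNELS)).astype(float)
y_train = rng.integers(0, len(VOCAB), size=1000)

# Fit a one-layer linear classifier by least squares on one-hot targets,
# a crude stand-in for a gradient-trained neural decoder.
Y = np.eye(len(VOCAB))[y_train]
W, *_ = np.linalg.lstsq(X_train, Y, rcond=None)

def decode_window(spike_counts: np.ndarray) -> str:
    """Map one window of spike counts to the most likely token."""
    scores = spike_counts @ W
    return VOCAB[int(np.argmax(scores))]

# Decode a short stream of windows into a token sequence.
stream = rng.poisson(lam=3.0, size=(10, N_CHANNELS)).astype(float)
print(" ".join(decode_window(w) for w in stream))
```

In real systems the hard part is exactly what the story describes: inner-speech signals are weaker than attempted-speech signals, so the same pipeline that works on one can degrade badly on the other.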

Staying in the brain—but going in a more unconventional direction—Cortical Labs in Australia says it has trained living human neurons on a chip to play the classic shooter Doom in about a week. The neurons are grown on microelectrode arrays that stimulate the cells and read their electrical activity. Cortical Labs previously showed a neuron-chip playing Pong in 2021, but this Doom demo used far fewer neurons—roughly a quarter as many—and leaned heavily on a new interface that lets developers program the setup using Python. An independent developer, Sean Cole, reportedly used those tools to get the neuron-chip interacting with Doom within days. The performance was nowhere near that of a good human player, but it beat random firing, and researchers argue the bigger story is programmability: making living neural hardware easier to experiment with. Experts also point out that we still don’t fully understand what the neurons are learning, or how they interpret the game state without senses like vision. Still, it’s a clear step toward hybrid computing—combining biology and silicon for specific tasks, potentially including robotics control down the line.
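Cortical Labs’ actual SDK isn’t shown in the coverage, but the closed-loop pattern described, reading electrode activity, mapping it to a game action, and feeding stimulation back as reward or penalty, is easy to sketch. The snippet below is purely hypothetical: every function name is invented, and the hardware calls are stubbed so only the loop structure is real.

```python
import random

# Hypothetical closed-loop sketch of the stimulate/read pattern described
# above. Nothing here is Cortical Labs' real API; hardware interactions
# are stubbed with placeholders so the control loop runs standalone.

def read_activity() -> list[float]:
    """Stub for sampling firing rates from the microelectrode array."""
    return [random.random() for _ in range(8)]

def stimulate(pattern: str) -> None:
    """Stub for delivering a feedback stimulation pattern."""
    pass  # predictable stimulation ~ reward, noisy stimulation ~ penalty

ACTIONS = ["turn_left", "turn_right", "move_forward", "shoot"]

def choose_action(rates: list[float]) -> str:
    """Map the most active electrode group to a Doom action."""
    group = max(range(len(ACTIONS)), key=lambda i: sum(rates[i * 2 : i * 2 + 2]))
    return ACTIONS[group]

for step in range(5):                   # one short control-loop episode
    action = choose_action(read_activity())
    hit_target = random.random() < 0.2  # stand-in for game feedback
    stimulate("predictable" if hit_target else "noise")
    print(step, action, "reward" if hit_target else "penalty")
```

The design point worth noticing is the loop itself: the “training signal” is just structured versus unstructured stimulation, which is the mechanism Cortical Labs has described in its published Pong work.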

On the silicon side of the world, ASML says its next-generation High-NA EUV lithography tools are ready for high-volume manufacturing. ASML remains the only commercial supplier of EUV machines, and High-NA EUV is the next leap—aimed at printing finer patterns and, crucially, reducing the number of expensive manufacturing steps needed for advanced chips. ASML’s CTO Marco Pieters said the company will share performance and imaging data at a technical conference in San Jose, and cited production-readiness signals like limited downtime and around 500,000 processed wafers. Uptime is reportedly around 80% today, with a goal of 90% by year’s end. Each High-NA tool is priced around $400 million—about double earlier EUV systems—so the economics will matter as much as the physics. Even if the tool is ready now, ASML expects chipmakers will need two to three years of integration work before High-NA is deeply embedded in the most advanced production lines.

Next, a legal fight that could reshape how social platforms design for teens. In Los Angeles County Superior Court, a 20-year-old woman identified as KGM testified that she became addicted to social media as a child—starting with YouTube at age six and Instagram at nine. She told jurors that by age ten she was depressed and self-harming, and described anxiety, insecurity, and strained relationships. She also said beauty filters warped her self-image, and that losing access to her phone triggered panic and fear of missing out. KGM is the lead plaintiff in a major lawsuit against YouTube and Meta alleging the companies intentionally built addictive products that harm young people—pointing to mechanics like autoplay, infinite scroll, and the feedback loop of likes. Hers is the first case to reach trial out of a consolidated action of more than 1,600 plaintiffs, including families and school districts, and the first of several bellwether trials meant to test how juries respond. TikTok and Snap were defendants in KGM’s case but settled shortly before trial, with terms not disclosed. Meta and YouTube deny wrongdoing, arguing other factors—like home life and preexisting challenges—played major roles. The next witnesses are expected to include KGM’s mother and a child and adolescent psychiatrist.

Briefly on geopolitics and verification tech: a confidential IAEA report says Iran has not allowed inspectors access to nuclear facilities bombed by Israel and the United States during a 12-day war in June. Without access, the IAEA says it cannot verify whether enrichment-related work has stopped at the affected sites, or confirm the status and location of uranium stockpiles there—what it calls a “loss of continuity of knowledge.” Iran said in a February 2 letter that normal safeguards are not workable due to threats and acts of aggression. The IAEA estimates Iran has about 440.9 kilograms of uranium enriched up to 60% purity—close to weapons-grade levels—while emphasizing that quantity alone does not prove a weapon exists. The agency has relied on commercial satellite imagery to track activity at places like Isfahan, Natanz, and Fordow, but says images can’t confirm the nature of that activity. The IAEA’s director general, Rafael Grossi, has been involved in recent U.S.-Iran talks, and technical discussions are expected to continue in Vienna.

Finally, a practical piece of science that could make environmental monitoring faster and cheaper. Researchers at Rice University are developing portable methods to detect hazardous pollutants—like cancer-linked polycyclic aromatic hydrocarbons—using nanomaterials and machine learning. The idea is to create metal-salt-derived nanoparticles in an “ink,” coat microscope slides, and then place a drop of sample on the surface. As the sample dries, pollutant molecules stick to the nanoparticles, and infrared spectroscopy can pick up amplified signal signatures. Real-world samples are messy, so machine learning is used to tease apart overlapping spectral patterns and identify compounds without first separating them in a lab. The team says results could come back within a few hours—potentially enabling on-site screening with more affordable instruments than standard off-site testing. They’ve filed a patent around the spectroscopy-plus-ML approach, and they’re working through the hard parts now: tuning nanoparticle chemistry for different pollutant classes and making the algorithms robust across varied, real samples.
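The Rice team’s models aren’t public, but one common way to frame the “tease apart overlapping spectra” step is spectral unmixing: treat the measured spectrum as a non-negative combination of reference pollutant spectra and solve for the mixing weights. Below is a minimal sketch using non-negative least squares; the Gaussian peaks are synthetic stand-ins for real reference spectra, and none of this is Rice’s actual method.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of spectral unmixing: model a measured IR spectrum as a
# non-negative combination of reference compound spectra, then recover
# the weights with non-negative least squares. Spectra here are synthetic
# Gaussian peaks, not real PAH signatures.

wavenumbers = np.linspace(600, 1800, 400)

def peak(center: float, width: float = 40.0) -> np.ndarray:
    """Synthetic absorption band centered at `center` cm^-1."""
    return np.exp(-((wavenumbers - center) ** 2) / (2 * width**2))

# Reference library: one column per (made-up) compound.
library = np.column_stack([
    peak(750) + 0.5 * peak(1450),   # "compound A"
    peak(1100),                     # "compound B"
    peak(1600) + 0.3 * peak(900),   # "compound C"
])

# Simulated field sample: 2 parts A, 0.5 part C, plus sensor noise.
true_weights = np.array([2.0, 0.0, 0.5])
noise = 0.01 * np.random.default_rng(1).normal(size=wavenumbers.size)
sample = library @ true_weights + noise

weights, residual = nnls(library, sample)
print(np.round(weights, 2))  # ~[2.0, 0.0, 0.5]: concentrations recovered
```

Real field samples are far messier than this, which is presumably where the team’s learned models earn their keep over a plain linear solve.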

That’s the tech landscape for February 28th, 2026: historic AI funding, a widening chip arms race, serious questions about military access and AI guardrails, and rapid progress at the edge of brain decoding and biological computing—plus a reminder that not all innovation is digital-first, especially when it comes to public health. If you want, share which story you’d like us to track most closely: OpenAI’s new alliances, the Anthropic standoff, or the latest in brain-computer interfaces. Thanks for listening to The Automated Daily, tech news edition.