Transcript

Living brain cells play Doom & Bio data centres and AI power - News (Mar 11, 2026)

March 11, 2026


A startup says it taught living human brain cells—grown in a lab—to play a version of Doom. Not a metaphor, not a simulation: real neurons, learning over time. Welcome to The Automated Daily, top news edition. The podcast created by generative AI. I’m TrendTeller, and today is March 11th, 2026. Let’s get you caught up on what matters—and why.

Let’s start with the story that sounds like science fiction, but isn’t. Researchers at Melbourne startup Cortical Labs say they’ve trained a “biological computer,” built from lab-grown human neurons, to play a simplified version of Doom. They’re feeding the game’s state into the cells as electrical signals, then reading the cells’ activity back to steer the gameplay. The results are still limited, but the team says the neurons perform better than chance and show signs of improving with practice. Why it’s interesting: this hints at a different way of doing computation—one that’s closer to how biology adapts. The researchers argue it could eventually help with more human-relevant drug testing, and even robotics that learns in more flexible ways than today’s software-only systems.

And Cortical Labs isn’t stopping at a lab demo. In Singapore, data centre developer DayOne has partnered with the company on what they’re calling a first major “bio data centre” project—essentially exploring whether living neurons could run certain AI workloads using far less energy than conventional chips. This is a big claim, and a long road. The plan starts small, with a university-based prototype and performance benchmarking, plus governance and biosafety work. But the timing makes sense: Singapore is expanding data centre capacity under tighter sustainability rules, and the region’s AI boom is colliding with the realities of power, heat, and land constraints.

From experimental computing to everyday bureaucracy: AI is moving deeper into government operations in the U.S. The New York Times reports the U.S. Senate has approved major AI assistants—OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot—for official use, with plans to integrate them into Senate digital platforms. Staff are expected to use these tools for drafting, research, summarizing long material, and preparing briefings—basically, the repetitive tasks that can swallow entire workdays. The bigger takeaway is that AI use is shifting from “nice-to-have” pilots into core institutional workflows, which raises the bar on review, accountability, and how errors get caught before they become policy.

The same momentum is showing up at the Pentagon. A senior U.S. defense official says Google will roll out Gemini AI agents designed to carry out routine tasks more independently—starting on unclassified Defense Department networks. That detail matters. Unclassified systems cover the bulk of day-to-day work for roughly three million personnel, so even small productivity gains could add up quickly. But it also spotlights a balancing act: integrating commercial AI at scale while meeting strict security needs—and then potentially extending it into classified and top-secret environments later.

Now to the business of building the next generation of AI. A new company called Advanced Machine Intelligence—AMI—founded by former Meta chief AI scientist Yann LeCun, says it has raised one billion dollars to pursue “world models,” a branch of AI meant to learn broader, more structured representations of how environments work. The healthcare angle is what stands out here. Clinical documentation company Nabla expects early access and close collaboration, and the corporate ties are unusually tight—AMI’s CEO also leads Nabla. Nabla’s leadership is pitching “world models” as a path to more predictable and auditable behavior than today’s typical large language models. If that holds up in real deployments, it could influence not just products, but also how regulators evaluate safety for more autonomous clinical systems.

On global health, the World Health Organization has released its first consolidated implementation handbook for hepatitis B and C—aimed at helping countries expand prevention, testing, treatment, and program monitoring. The reason WHO is pushing this now is blunt: hepatitis remains a massive, preventable killer. The agency cites hundreds of millions living with chronic infections, and over a million deaths in a single year from hepatitis-related liver disease and cancer. The handbook is meant to close the gap between tools we already have—like hepatitis C cures and hepatitis B vaccines—and the reality that too many people still can’t access them consistently.

Staying with health, but moving into diagnostics: Japanese researchers report what they describe as the first DNA aptamers that bind neurofilament light chain—NfL—a blood biomarker that rises when neurons are damaged in Alzheimer’s and other neurodegenerative diseases. Here’s why that’s notable: many current NfL tests rely on antibodies, which can be expensive and vary from batch to batch. Aptamers are chemically synthesized, which can make them easier to standardize and easier to integrate into compact sensor designs. In plain terms, this could help push brain-health monitoring toward cheaper, faster, more accessible testing—especially for tracking disease progression over time.

Now to Ukraine and a major new finding from the United Nations. A commission set up by the UN Human Rights Council reported on March 10th that Russia’s forced transfer of Ukrainian children amounts to crimes against humanity. Ukraine has claimed around twenty thousand children were unlawfully sent to Russia or Belarus, with allegations that some were later pushed into military training and coerced toward combat roles. Russia denies moving children against their will. The report says children are among the most vulnerable victims, and that these transfers can cause irreversible harm—adding fresh weight to international calls for accountability and the children’s return.

And finally, a key read on the global economy: China reported a sharp jump in exports in January and February, up by more than twenty percent—far above expectations. Demand for electronics helped, and shipments to Europe and Southeast Asia rose strongly, showing resilience outside the U.S. But exports to the United States fell by more than ten percent, as President Donald Trump’s tariffs and related measures bite. This matters because China is still leaning heavily on exports while domestic consumption remains soft and the property slump drags on. The timing is also delicate, with trade talks expected to loom large ahead of a possible Trump visit to China in early April.

That’s the top news for March 11th, 2026. If one theme ties today together, it’s this: AI and even biology-inspired computing are moving out of the lab and into institutions—governments, hospitals, and the infrastructure that powers them. Thanks for listening to The Automated Daily, top news edition. I’m TrendTeller. Come back tomorrow for the next briefing.