Transcript
Neurons on a chip play Doom & Apple reshuffles design leadership - Tech News (Mar 9, 2026)
March 9, 2026
A lab just wired living human neurons to a chip—and got them to control Doom. Not well, but well enough to make the future feel uncomfortably close. Welcome to The Automated Daily, tech news edition, the podcast created by generative AI. I’m TrendTeller, and today is March 9th, 2026. Let’s get into what’s moving the tech world—without the hype.
Let’s start with that Doom moment. Australian startup Cortical Labs demoed what it calls a biological computer: a silicon chip connected to roughly two hundred thousand living human neurons. Those neurons were fed signals representing what was happening on-screen, and their responses were interpreted as game controls. The result looks more like a beginner than a gamer—but the point isn’t entertainment. It’s a vivid proof that living neural networks can be coupled to software in a repeatable way, which could become a useful tool for studying learning, testing drugs, and exploring new computing approaches that don’t look like today’s silicon-only machines.
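If you're wondering what "coupled to software in a repeatable way" means in practice, here is a minimal sketch of such a closed loop. To be clear, the function names, the electrode-group setup, and the rate-to-action mapping below are illustrative assumptions, not Cortical Labs' actual interface:

```python
# Illustrative closed-loop sketch of coupling a game to cultured neurons.
# All names (stimulate, read_firing_rates, encode, decode) are hypothetical
# stand-ins for whatever electrode-array API the lab actually uses.
import random
import time

def stimulate(pattern):
    """Deliver an electrical stimulus encoding the current game state."""
    pass  # hardware call would go here

def read_firing_rates():
    """Return spikes/sec for two electrode groups (stand-in: random noise)."""
    return random.uniform(0, 50), random.uniform(0, 50)

def encode(game_state):
    """Map on-screen features (e.g., enemy bearing) to a stimulation pattern."""
    return {"channel": 0 if game_state["enemy_left"] else 1, "amplitude_uA": 10}

def decode(left_rate, right_rate, threshold=5.0):
    """Interpret population activity as a game action."""
    if left_rate - right_rate > threshold:
        return "turn_left"
    if right_rate - left_rate > threshold:
        return "turn_right"
    return "shoot"

game_state = {"enemy_left": True}
for _ in range(10):            # one decision per loop tick
    stimulate(encode(game_state))
    time.sleep(0.05)           # give the culture time to respond
    action = decode(*read_firing_rates())
    print(action)              # would be fed back into the game
```

The repeatability claim lives in that loop: stimulate, read, decode, act, over and over, with the same coupling each time.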
Switching to Apple, there’s an interesting leadership signal coming out of Cupertino. Apple updated its executive leadership page to add designers Stephen Lemay and Molly Anderson, effectively giving them a more public, top-tier profile after recent upheaval around design leadership. Commentary around the move frames it as Apple trying to tighten its identity again—after a few years of mixed reception across big bets, marketing choices, and ongoing worries about software polish. The larger takeaway: Apple may be preparing to make design leadership more visible, and more accountable, at a time when its product narrative needs steadier footing.
Now to the jobs question that keeps getting louder in engineering circles. One software engineer’s reflection making the rounds compares the confidence of 2021—when software careers felt like a safe bet—to 2026, where the future looks less certain as AI coding agents improve. The argument is that entry-level and mid-level roles could take the first hit, while senior engineers shift toward guiding and auditing AI output. What makes this view sting is the claim that the tools aren’t just writing code faster—they’re getting better at the unglamorous work too, like understanding old systems, fixing bugs, and keeping things running. Even if you don’t buy the most pessimistic version, it’s a clear sign that “software engineer” as a job title may be changing faster than companies, universities, and career ladders are ready for.
That theme connects to a broader fight over what AI should be allowed to do—and who gets to set the rules. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei are now openly clashing, not just over products, but over ethics, regulation, and national-security work. The latest flashpoint involves Pentagon relationships: Anthropic reportedly refused to loosen certain red lines around surveillance and autonomous weapons, and says it paid a price for that stance. OpenAI, meanwhile, expanded its defense ties, and the rivalry spilled into leaked memos, public jabs, and bruised reputations. Why it matters: when two of the most influential labs can’t coordinate, it becomes harder to present consistent safety norms—and easier for politics to steer the whole field.
On the legal and cultural side of software, there’s a growing argument that today’s outrage about AI-assisted “rewrites” is missing historical context. Developers have reimplemented software for decades—sometimes to improve it, sometimes to compete, and often to create compatible alternatives without copying code directly. The new factor with AI isn’t that reimplementation suddenly became legal; it’s that it got dramatically cheaper and faster. And that speed is feeding a second debate: whether copyleft licenses like the GNU GPL lose leverage when teams can quickly recreate similar functionality under permissive licenses. Put simply, AI may make “we’ll just rebuild it” a more common answer—and that could reshape how open-source ecosystems consolidate, fragment, and fund themselves.
Meanwhile, the money side of tech is sending a weird signal: more capital, fewer people. A recent analysis points to venture funding surging—especially into a small number of AI leaders—while startup headcounts and hiring stay depressed compared with a few years ago. The suggestion is that it’s not just a temporary cycle or a data quirk: companies are increasingly substituting compute for labor. If that’s right, it helps explain why the AI boom hasn’t translated into broad-based hiring—and why “revenue per employee” is becoming a defining metric for modern startups, for better or worse.
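Since "revenue per employee" is doing a lot of work in that argument, here is the arithmetic in miniature. Every company and figure below is invented for illustration:

```python
# Toy illustration of the revenue-per-employee metric; all figures invented.
startups = [
    {"name": "LegacySaaS-2021", "revenue_usd": 50_000_000, "employees": 500},
    {"name": "AI-Native-2026",  "revenue_usd": 50_000_000, "employees": 40},
]

for s in startups:
    rpe = s["revenue_usd"] / s["employees"]
    print(f"{s['name']}: ${rpe:,.0f} revenue per employee")
# LegacySaaS-2021: $100,000 revenue per employee
# AI-Native-2026: $1,250,000 revenue per employee
```

Same revenue, a tenth of the headcount: that is the substitution of compute for labor expressed as a single number.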
Governments are also pushing harder to control who can access what online. After Australia moved toward restricting teen access to social media, regulators in Europe, Brazil, and multiple U.S. states are exploring stronger age checks—not only for social platforms, but also for AI chatbots and adult sites. The pitch is that age-assurance tools are improving and getting cheaper, making large-scale gating more plausible than it used to be. The pushback is predictable but serious: privacy risks, potential bias in face-based estimation, and messy edge cases right around legal cutoffs. The likely outcome is not one global standard, but a patchwork that platforms will still have to implement—because regulators increasingly believe enforcement is possible.
In medical science, Japan just took a step that could end up being historic. The country approved two therapies that use induced pluripotent stem cells—one targeting Parkinson’s disease, and another aimed at severe heart failure. These are being described as the first commercially approved products of their kind using iPS cells. The approvals are conditional and time-limited, based on smaller datasets than typical large drug trials, but they still mark a major transition from experimental promise to real-world use. If these therapies hold up as more patients receive them, it could accelerate investment and confidence in regenerative medicine that doesn’t just manage symptoms, but tries to repair damaged tissue.
On infrastructure, researchers are showing how satellites could help spot bridge trouble early. Using satellite radar imaging, a global study looked at hundreds of long-span bridges and found that millimeter-scale movement—often invisible to inspectors—can be detected and tracked over time. The study also suggests many bridges are aging into higher-risk territory, and that continuous on-structure sensors are still rare worldwide. The appeal here is coverage: satellites can revisit regularly and monitor many structures at once, potentially helping agencies prioritize inspections and maintenance before problems become emergencies.
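To make the "prioritize inspections" idea concrete, here is a small sketch of how an agency might turn satellite displacement time series into a watch list. The data, threshold, revisit cadence, and field names are assumptions; real radar-interferometry pipelines are far more involved:

```python
# Sketch: flag bridges whose satellite-measured displacement trend exceeds
# a threshold. Displacements in millimeters, one reading every ~12 days
# (a typical radar revisit cadence); all numbers here are invented.

def slope_mm_per_year(readings_mm, revisit_days=12):
    """Least-squares trend of a displacement series, in mm/year."""
    n = len(readings_mm)
    xs = [i * revisit_days / 365.0 for i in range(n)]   # time in years
    mean_x = sum(xs) / n
    mean_y = sum(readings_mm) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings_mm))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

bridges = {
    "Bridge A": [0.0, 0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.3],   # stable
    "Bridge B": [0.0, 0.9, 1.8, 2.6, 3.5, 4.5, 5.3, 6.2],   # settling fast
}

THRESHOLD_MM_PER_YEAR = 5.0  # hypothetical alert level
for name, series in bridges.items():
    rate = slope_mm_per_year(series)
    flag = "inspect" if abs(rate) > THRESHOLD_MM_PER_YEAR else "ok"
    print(f"{name}: {rate:+.1f} mm/yr -> {flag}")
```

The value isn't any single reading—it's the trend across repeated passes, which is exactly what regular satellite revisits provide and sparse manual inspections don't.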
And finally, two space updates that both come down to measurement at absurdly fine scales. First, astronomers have produced the most detailed 3D map yet of Lyman-alpha light from roughly 9 to 11 billion years ago. Instead of focusing only on bright, easily cataloged galaxies, this technique captures the combined glow of hydrogen across huge volumes of space—revealing faint galaxies and intergalactic gas that typical surveys miss. Second, NASA scientists say the DART asteroid impact didn’t just change the orbit of the small moon it hit—it measurably nudged the path of the whole asteroid system around the Sun, by an amount that’s tiny but detectable. For planetary defense, that’s the real proof point: not just changing a local orbit, but altering a solar orbit in a way we can quantify.
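For the physics-curious, the back-of-envelope for that heliocentric nudge is simple momentum transfer. The sketch below uses rough public figures for DART's mass and impact speed and a plausible momentum-enhancement factor; treat every number as an approximation for illustration, not the mission's official result:

```python
# Back-of-envelope: how much an impactor shifts a whole asteroid system.
# delta_v = beta * m_impactor * v_impact / M_system
# All figures are rough approximations, for illustration only.

m_impactor = 580.0   # kg, roughly DART's mass at impact
v_impact = 6.1e3     # m/s, approximate impact speed
beta = 3.6           # momentum enhancement from ejecta (estimates run ~2-5)
M_system = 5.3e11    # kg, rough mass of the Didymos-Dimorphos system

delta_v = beta * m_impactor * v_impact / M_system
print(f"heliocentric delta-v ~ {delta_v * 1000:.3f} mm/s")  # ~0.024 mm/s
```

A shift of hundredths of a millimeter per second sounds negligible, but integrated over years of orbital motion it accumulates into exactly the kind of measurable offset the NASA team is describing.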
That’s our run for today—March 9th, 2026. If there’s a common thread, it’s that measurement and automation are pushing into places that used to feel safely human: decision-making, design taste, work itself, even biology. I’m TrendTeller, and this has been The Automated Daily, tech news edition. Check back tomorrow for the next briefing.