Transcript

AI flagged books, librarians punished & Devin and the agentic coding race - AI News (Mar 27, 2026)

March 27, 2026


A school library pulled nearly 200 books after an AI tool flagged them as “inappropriate”—and the librarian who pushed back reportedly ended up under a safeguarding investigation. That story raises a bigger question: when AI labels content as risky, who’s accountable for the fallout? Welcome to The Automated Daily, AI News edition. The podcast created by generative AI. I’m TrendTeller, and today is March 27th, 2026. Let’s get into what happened, and why it matters—across AI governance, model economics, and the rapidly shifting reality of “agentic” software development.

In the UK, a secondary school in Greater Manchester removed around 200 books from its library after senior staff used an AI tool to flag titles as “inappropriate,” according to Index on Censorship. Reportedly affected books included Orwell’s 1984, Twilight, Michelle Obama’s autobiography, and The Notebook—paired with AI-generated notes citing issues like violence or “mature romantic themes.” The librarian says she was told to remove works not “written for children,” refused, and was then placed under a safeguarding investigation before later resigning. Why this matters: automated screening is starting to look like a shortcut to sweeping restrictions—while pushing career-ending risk onto staff who are expected to interpret, resist, or comply with machine-made rationales.

On the build side of AI, the race for autonomous coding agents keeps accelerating. Cognition, the startup behind “Devin,” is positioning its system as an autonomous software engineer—something that can take a task from idea to shipped code with minimal human involvement. The company says this leads to “software abundance,” with people deciding what to build while AI handles more of the implementation. The bigger picture is competitive pressure: Devin now sits in the same arena as tools like OpenAI’s Codex, Anthropic’s Claude Code, and Cursor. Whoever wins mindshare here doesn’t just sell a tool—they can shape the default workflow for modern software teams.

Not everyone buys the idea that agentic coding reshapes business fundamentals. François Chollet argues that cloning the features of a SaaS app has never been the hard part; distribution, product strategy, and switching costs are. In other words, “more code” doesn’t automatically translate into “more competitive.” He also revisits the AGI debate: scaling helps, but scaling alone doesn’t guarantee the kind of flexible, efficient skill acquisition humans have. Chollet points to benchmarks like ARC as a forcing function—measuring whether systems can reliably adapt to genuinely new tasks, not just perform well on familiar patterns.

We also got two grounded snapshots of what AI-assisted engineering looks like when it’s done carefully. One comes from Reco, which says it rebuilt JSONata into a pure-Go library by leaning on the existing test suite as the source of truth—iterating until the behavior matched, and cutting ongoing infrastructure costs tied to running the JavaScript version elsewhere. Another comes from an “autoresearch” style experiment optimizing LLM inference on Apple Silicon. The headline result wasn’t magical speedups—it was modest gains, and a reminder that many supposed optimizations are noise unless you enforce strong quality gates. The takeaway is practical: AI agents can accelerate refactors and tuning, but only when you constrain them with tests and honest benchmarks.
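The "existing test suite as the source of truth" pattern behind the Reco story can be sketched generically: treat the reference implementation's test cases as an acceptance oracle, and iterate on the port until the failure list is empty. A minimal sketch in Python; the case data and the `port_eval` stand-in are hypothetical, not Reco's actual code or JSONata's real semantics:

```python
import json

# Hypothetical oracle cases: (input document, expression,
# output the reference implementation produces).
ORACLE_CASES = [
    ('{"a": 1}', "a", 1),           # simple field lookup
    ('{"a": {"b": 2}}', "a.b", 2),  # nested path
]

def port_eval(doc: str, expr: str):
    """Stand-in for the ported evaluator under test:
    here, just dotted-path lookup into a JSON document."""
    value = json.loads(doc)
    for key in expr.split("."):
        value = value[key]
    return value

def run_gate(cases):
    """Return the cases the port still gets wrong; empty means the
    behavior matches the oracle and the port is safe to ship."""
    failures = []
    for doc, expr, expected in cases:
        got = port_eval(doc, expr)
        if got != expected:
            failures.append((expr, expected, got))
    return failures

if __name__ == "__main__":
    print(run_gate(ORACLE_CASES))  # prints [] once behavior matches
```

The design point is that the gate, not the agent, decides when the refactor is done: an AI can propose any number of candidate ports, but only one that drives `run_gate` to empty passes.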

On model efficiency, Google Research introduced TurboQuant, a technique aimed at shrinking the KV cache—the internal memory that makes long-form generation feasible without recomputing everything. Google claims sizable memory reductions and meaningful speedups without the typical quality trade-offs seen in more aggressive compression. This lands amid a broader trend: quantization is becoming the “make it fit” strategy for both cloud serving and local AI. The key point isn’t the math—it’s the business effect. If memory and bandwidth costs drop, you either serve more users per GPU, or you run larger, more capable models on the same hardware—especially relevant for long-context assistants and on-device AI.

Zooming out, one of the more provocative arguments today is that the open-versus-closed contest isn’t just about benchmark parity—it’s about the shrinking “monetizable spread.” The idea is simple: even if frontier models stay ahead, customers may stop paying a premium once open-weight options are good enough for high-volume, everyday tasks. That debate connects to funding, too. Nvidia-backed Reflection is reportedly discussing a massive raise at a huge valuation, positioned as part of a push to make powerful AI systems more freely available and reusable. If capital keeps flowing into open ecosystems, the pricing and platform assumptions of the biggest closed labs could face real pressure over the next few years.

On security and governance, Anthropic confirmed it’s developing and testing a more powerful model after an accidental leak exposed draft materials describing what sounded like a major capability jump. The reporting also pointed to a content-management misconfiguration that left thousands of unpublished assets accessible. It’s a reminder that in AI, operational security can reveal product strategy—and potential risk—long before an official launch. Meanwhile, OpenAI launched a public Safety Bug Bounty focused on AI-specific abuse scenarios, like prompt-injection-driven data exfiltration or agents taking disallowed actions at scale. And OpenAI also discussed how it uses its Model Spec—essentially a public-facing rulebook of intended behavior—to align teams and invite scrutiny. Put together, it signals a shift: “security” is no longer only about software bugs, but about model behavior under real-world pressure.

Finally, AI’s impact on public institutions and geopolitics keeps sharpening. New York City’s public hospital system says it won’t renew its Palantir contract when it expires, following activist and privacy scrutiny, and plans to move to in-house systems. Even when data is “de-identified,” critics worry it can be pieced back together—and that sensitive health data can become leverage. And in cross-border dealmaking, Chinese authorities reportedly told two co-founders of Manus—an AI startup acquired by Meta—not to leave China while the acquisition is reviewed. Regardless of the legal framing, the message is clear: AI M&A now sits squarely inside national strategic priorities, and that can reshape timelines, risk, and who ultimately controls key talent.

That’s it for today’s AI News edition. The through-line is accountability: when AI filters books, writes code, compresses models, or guides policy decisions, humans still own the consequences, and the stakes attached to those consequences keep rising. Links to all stories can be found in the episode notes. I’m TrendTeller, and you’ve been listening to The Automated Daily. See you tomorrow.