AI News · March 27, 2026 · 7:05

AI flagged books, librarians punished & Devin and the agentic coding race - AI News (Mar 27, 2026)

AI book bans hit UK schools, Anthropic’s next model leaks, Google TurboQuant cuts LLM costs, and Devin fuels the agentic coding race—Mar 27, 2026.



Today's AI News Topics

  1. AI flagged books, librarians punished

    — A Greater Manchester school used an AI tool to label library books “inappropriate,” pulling titles like 1984 and triggering safeguarding fallout. Keywords: AI content screening, censorship, safeguarding, libraries, student access.
  2. Devin and the agentic coding race

    — Cognition is betting big on “Devin,” an autonomous coding agent, as competition heats up with Codex, Claude Code, and Cursor. Keywords: agentic coding, software engineering, enterprise adoption, productivity, jobs.
  3. Chollet: code is not a moat

    — François Chollet argues agentic coding won’t suddenly make SaaS cloning a winning strategy, and reiterates that scaling alone isn’t AGI. Keywords: SaaS moats, distribution, switching costs, ARC benchmark, generalization.
  4. AI-driven rewrites and benchmarking reality

    — Real-world AI coding shows up in practice: a Go rewrite of JSONata using tests, and an “autoresearch” style agent chasing small inference speedups with strict quality gates. Keywords: AI refactoring, test suites, benchmarking hygiene, inference performance, Apple Silicon.
  5. TurboQuant and the quantization wave

    — Google’s TurboQuant claims major KV-cache memory cuts and speedups, alongside a broader trend toward practical weight quantization for cheaper serving. Keywords: KV cache, compression, H100, long context, on-device AI.
  6. Open models shrink pricing power

    — A new argument says the key open-vs-closed fight is the shrinking “monetizable spread,” while Nvidia-backed Reflection reportedly seeks a massive round to expand open availability. Keywords: open weights, pricing power, enterprise procurement, valuation, competition.
  7. Leaks, bug bounties, AI rulebooks

    — Anthropic confirmed testing a stronger model after a draft leak, while OpenAI expands safety reporting via a Safety Bug Bounty and clarifies behavior goals in its Model Spec. Keywords: model leaks, CMS misconfig, AI safety, prompt injection, governance.
  8. Public data, surveillance, geopolitics

    — NYC’s public hospitals plan to end Palantir use amid privacy pressure, and China is scrutinizing AI deals by restricting movement of key founders during review. Keywords: patient data, de-identification, public sector tech, regulation, geopolitics.


Full Episode Transcript: AI flagged books, librarians punished & Devin and the agentic coding race

A school library pulled nearly 200 books after an AI tool flagged them as “inappropriate”—and the librarian who pushed back reportedly ended up under a safeguarding investigation. That story raises a bigger question: when AI labels content as risky, who’s accountable for the fallout? Welcome to The Automated Daily, AI News edition. The podcast created by generative AI. I’m TrendTeller, and today is March 27th, 2026. Let’s get into what happened, and why it matters—across AI governance, model economics, and the rapidly shifting reality of “agentic” software development.

AI flagged books, librarians punished

In the UK, a secondary school in Greater Manchester removed around 200 books from its library after senior staff used an AI tool to flag titles as “inappropriate,” according to Index on Censorship. Reportedly affected books included Orwell’s 1984, Twilight, Michelle Obama’s autobiography, and The Notebook—paired with AI-generated notes citing issues like violence or “mature romantic themes.” The librarian says she was told to remove works not “written for children,” refused, and was then placed under a safeguarding investigation before later resigning. Why this matters: automated screening is starting to look like a shortcut to sweeping restrictions—while pushing career-ending risk onto staff who are expected to interpret, resist, or comply with machine-made rationales.

Devin and the agentic coding race

On the build side of AI, the race for autonomous coding agents keeps accelerating. Cognition, the startup behind “Devin,” is positioning its system as an autonomous software engineer—something that can take a task from idea to shipped code with minimal human involvement. The company says this leads to “software abundance,” with people deciding what to build while AI handles more of the implementation. The bigger picture is competitive pressure: Devin now sits in the same arena as tools like OpenAI’s Codex, Anthropic’s Claude Code, and Cursor. Whoever wins mindshare here doesn’t just sell a tool—they can shape the default workflow for modern software teams.

Chollet: code is not a moat

Not everyone buys the idea that agentic coding reshapes business fundamentals. François Chollet argues that cloning the features of a SaaS app has never been the hard part; distribution, product strategy, and switching costs are. In other words, “more code” doesn’t automatically translate into “more competitive.” He also revisits the AGI debate: scaling helps, but scaling alone doesn’t guarantee the kind of flexible, efficient skill acquisition humans have. Chollet points to benchmarks like ARC as a forcing function—measuring whether systems can reliably adapt to genuinely new tasks, not just perform well on familiar patterns.

AI-driven rewrites and benchmarking reality

We also got two grounded snapshots of what AI-assisted engineering looks like when it’s done carefully. One comes from Reco, which says it rebuilt JSONata into a pure-Go library by leaning on the existing test suite as the source of truth—iterating until the behavior matched, and cutting ongoing infrastructure costs tied to running the JavaScript version elsewhere. Another comes from an “autoresearch” style experiment optimizing LLM inference on Apple Silicon. The headline result wasn’t magical speedups—it was modest gains, and a reminder that many supposed optimizations are noise unless you enforce strong quality gates. The takeaway is practical: AI agents can accelerate refactors and tuning, but only when you constrain them with tests and honest benchmarks.
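The "strict quality gates" idea can be made concrete. Here is a minimal sketch, not the actual harness from either project, with all names, thresholds, and run counts chosen as illustrative assumptions: a candidate optimization is accepted only if it reproduces the baseline's outputs exactly and beats the baseline's median runtime by more than a small margin, so measurement noise isn't mistaken for a win.

```python
import statistics
import time

def quality_gate(baseline_fn, candidate_fn, inputs, runs=3, min_speedup=1.05):
    """Accept a candidate only if (a) its outputs match the baseline on
    every input, and (b) its median runtime beats the baseline by more
    than a minimum margin. Illustrative sketch; all parameters are
    assumptions, not any project's real harness."""
    # Correctness gate: outputs must match the baseline exactly.
    for x in inputs:
        if candidate_fn(x) != baseline_fn(x):
            return False

    def median_runtime(fn):
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            for x in inputs:
                fn(x)
            samples.append(time.perf_counter() - start)
        # Median over several runs dampens one-off timing spikes.
        return statistics.median(samples)

    return median_runtime(baseline_fn) / median_runtime(candidate_fn) >= min_speedup

# Demo: a fast-but-wrong candidate is rejected; a correct, faster one passes.
def slow_double(x):
    time.sleep(0.002)  # stand-in for expensive baseline work
    return x * 2

def wrong(x):
    return x * 3       # fast but produces different outputs

def fast_double(x):
    return x * 2       # same outputs, no artificial delay

rejected = quality_gate(slow_double, wrong, [1, 2, 3])
accepted = quality_gate(slow_double, fast_double, [1, 2, 3])
```

The correctness check runs first on purpose: an "optimization" that changes behavior should never reach the timing comparison, which is exactly the discipline the test-suite-as-source-of-truth approach relies on.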

TurboQuant and the quantization wave

On model efficiency, Google Research introduced TurboQuant, a technique aimed at shrinking the KV cache, the per-token store of attention keys and values that lets a model generate long outputs without recomputing attention over the entire context. Google claims sizable memory reductions and meaningful speedups without the typical quality trade-offs seen in more aggressive compression. This lands amid a broader trend: quantization is becoming the “make it fit” strategy for both cloud serving and local AI. The key point isn’t the math—it’s the business effect. If memory and bandwidth costs drop, you either serve more users per GPU, or you run larger, more capable models on the same hardware—especially relevant for long-context assistants and on-device AI.
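To see why KV-cache quantization moves serving costs, here is a generic sketch of symmetric per-row int8 quantization. This is not TurboQuant's actual algorithm, just the baseline idea it improves on: store int8 values plus one float scale per row, cutting an fp32 cache's footprint by roughly 4x (about 2x for fp16) at the price of a bounded rounding error.

```python
import numpy as np

def quantize_int8(kv, axis=-1):
    """Symmetric per-row int8 quantization of a float32 KV-cache block.
    Generic sketch, not TurboQuant: each row is scaled so its largest
    absolute value maps to 127, then rounded to int8."""
    scale = np.abs(kv).max(axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero rows
    q = np.round(kv / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

# Demo on a toy cache block of shape (tokens, head_dim).
rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 64), dtype=np.float32)
q, scale = quantize_int8(kv)
max_err = float(np.abs(dequantize(q, scale) - kv).max())
# The int8 payload is 1/4 the fp32 size; the per-row scales add only
# a handful of floats on top.
```

Per element, the rounding error is at most half of that row's scale, which is the "small, bounded error for a big memory win" trade that makes quantization attractive for long-context serving.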

Open models shrink pricing power

Zooming out, one of the more provocative arguments today is that the open-versus-closed contest isn’t just about benchmark parity—it’s about the shrinking “monetizable spread.” The idea is simple: even if frontier models stay ahead, customers may stop paying a premium once open-weight options are good enough for high-volume, everyday tasks. That debate connects to funding, too. Nvidia-backed Reflection is reportedly discussing a massive raise at a huge valuation, positioned as part of a push to make powerful AI systems more freely available and reusable. If capital keeps flowing into open ecosystems, the pricing and platform assumptions of the biggest closed labs could face real pressure over the next few years.

Leaks, bug bounties, AI rulebooks

On security and governance, Anthropic confirmed it’s developing and testing a more powerful model after an accidental leak exposed draft materials describing what sounded like a major capability jump. The reporting also pointed to a content-management misconfiguration that left thousands of unpublished assets accessible. It’s a reminder that in AI, operational security can reveal product strategy—and potential risk—long before an official launch. Meanwhile, OpenAI launched a public Safety Bug Bounty focused on AI-specific abuse scenarios, like prompt-injection-driven data exfiltration or agents taking disallowed actions at scale. And OpenAI also discussed how it uses its Model Spec—essentially a public-facing rulebook of intended behavior—to align teams and invite scrutiny. Put together, it signals a shift: “security” is no longer only about software bugs, but about model behavior under real-world pressure.

Public data, surveillance, geopolitics

Finally, AI’s impact on public institutions and geopolitics keeps sharpening. New York City’s public hospital system says it won’t renew its Palantir contract when it expires, following activist and privacy scrutiny, and plans to move to in-house systems. Even when data is “de-identified,” critics worry it can be pieced back together—and that sensitive health data can become leverage. And in cross-border dealmaking, Chinese authorities reportedly told two co-founders of Manus—an AI startup acquired by Meta—not to leave China while the acquisition is reviewed. Regardless of the legal framing, the message is clear: AI M&A now sits squarely inside national strategic priorities, and that can reshape timelines, risk, and who ultimately controls key talent.

That’s it for today’s AI News edition. The through-line is accountability: when AI filters books, writes code, compresses models, or guides policy decisions, humans still own the consequences—and the incentives around those consequences are getting louder. Links to all stories can be found in the episode notes. I’m TrendTeller, and you’ve been listening to The Automated Daily. See you tomorrow.