Publishers block the Wayback Machine & Mamba-3 shifts AI toward inference - Hacker News (Mar 21, 2026)
Publishers block the Wayback Machine, BYD touts 5‑minute EV charging, Mamba‑3 boosts inference, plus OpenCode, Atuin, FFmpeg, FilmKit, and UI safety.
Today's Hacker News Topics
- Publishers block the Wayback Machine — EFF says major publishers like The New York Times are blocking Internet Archive crawling, threatening the Wayback Machine, web preservation, and verifiable journalism records amid AI scraping lawsuits.
- Mamba-3 shifts AI toward inference — Together AI and collaborators released Mamba-3, a state space model optimized for inference latency and throughput, highlighting a broader shift toward deployment-heavy AI workloads like agents and RL.
- Open-source coding agents go desktop — OpenCode shipped a beta desktop app for macOS, Windows, and Linux, positioning a model-agnostic AI coding agent with privacy-first claims, multi-session workflows, and broad LLM provider support.
- Atuin makes terminal history smarter — Atuin v18.13 improves shell history search speed and terminal UX, and adds an opt-in AI command helper with safety confirmations and privacy controls—useful for day-to-day developer workflows.
- BYD pushes megawatt EV charging — BYD claims up to 1,500 kW “Flash Charger” speeds, reframing EV charging as an infrastructure and grid problem as much as a battery problem, despite impressive five-minute top-ups.
- FFmpeg explained for app builders — A practical overview demystifies FFmpeg’s architecture—how tools map to libraries like libavformat and libavcodec—helping developers integrate decoding without getting lost in the ecosystem.
- FilmKit brings Fujifilm tools to browser — FilmKit is an open-source, WebUSB-based preset manager and RAW converter for Fujifilm cameras that uses the camera’s own processor, offering a cross-platform, zero-install alternative in beta.
- Molly guards and safer interfaces — A reflection on “molly guards” connects physical safety covers and software confirmations, and introduces “reverse molly guards”—timeouts and defaults that prevent long-running systems from stalling.
Sources & Hacker News References
- OpenCode launches beta desktop app for its open-source AI coding agent
- Mamba-3 Debuts as an Inference-Optimized State Space Model with Open-Source Kernels
- FFmpeg 101: Understanding FFmpeg Tools, Libraries, and a Minimal Demux/Decode Loop
- BYD’s 1.5-Megawatt EV Charging Nears Gas-Pump Speeds, but Infrastructure Limits Remain
- Atuin v18.13 boosts search speed, adds Hex PTY proxy, and introduces opt-in shell AI
- Trigger.dev’s TRQL adds secure, tenant-isolated SQL querying on shared ClickHouse
- Glossary Explains Japanese Chopstick Etiquette Mistakes and Taboos
- FilmKit brings browser-based Fujifilm preset management and in-camera RAW conversion via WebUSB
- Explaining ‘Molly Guards’ and the Case for Reverse Safety Prompts
- EFF Warns Publisher Blocks on Internet Archive Threaten the Web’s Historical Record
Full Episode Transcript: Publishers block the Wayback Machine & Mamba-3 shifts AI toward inference
Some of the biggest news sites are quietly cutting the Internet Archive off at the knees—and if that sticks, parts of the web’s memory could just… vanish. Welcome to The Automated Daily, hacker news edition. The podcast created by generative AI. I’m TrendTeller, and today is March 21st, 2026. Let’s get into what’s moving fast, what’s breaking, and what it means.
Publishers block the Wayback Machine
First up: the Electronic Frontier Foundation is raising the alarm that major publishers have started blocking the Internet Archive from crawling their sites, which threatens the completeness of the Wayback Machine—especially for news. The backdrop is the escalating fight over AI scraping and copyright. Publishers want leverage, and they’re looking for ways to reduce bulk access to their content. But EFF’s point is blunt: blocking a nonprofit archive doesn’t meaningfully stop AI companies, while it absolutely does weaken a public record that journalists, researchers, courts, and Wikipedia rely on to prove what was published—and when. If archived snapshots get spotty, accountability gets harder. Quiet edits and deleted pages become much easier to deny.
Mamba-3 shifts AI toward inference
Staying in the AI lane, a new model architecture called Mamba-3 just dropped from Together AI and academic and industry collaborators—and the headline is a shift in priorities. Instead of optimizing for training speed, Mamba-3 is tuned for inference efficiency: the real-world cost and latency of generating tokens once a model is deployed. That matters because modern AI work is increasingly dominated by post-training and usage—think agentic coding workflows, tool use, and evaluation loops where models spend their lives generating, not learning. The team is essentially saying: the bottleneck isn’t building the brain, it’s running it at scale. They’re also open-sourcing high-performance GPU kernels to make those claimed speedups more than a paper exercise. If the results hold up across more setups, this is one more nudge toward a future where “faster at serving” is as strategically important as “bigger at training.”
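Mamba-3’s selective state space machinery is far richer than this, but the property behind the inference story—a fixed-size recurrent state, versus an attention cache that grows with every generated token—can be sketched with a toy scalar recurrence (coefficients here are purely illustrative, not Mamba-3’s):

```python
# Toy diagonal linear state space model: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.
# The point: generating each new token touches a fixed-size state,
# unlike attention, whose key/value cache grows with sequence length.

def ssm_generate(xs, a=0.9, b=0.5, c=1.0):
    """Run the recurrence over inputs xs; the state is a single scalar."""
    h = 0.0
    ys = []
    for x in xs:            # one O(1) update per token, constant memory
        h = a * h + b * x   # state update
        ys.append(c * h)    # readout
    return ys

# An impulse input decays geometrically through the state:
out = ssm_generate([1.0, 0.0, 0.0])   # approximately [0.5, 0.45, 0.405]
```

Per-token cost and memory stay flat no matter how long the generation runs—which is exactly the deployment-time economics the release is optimizing for.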
Open-source coding agents go desktop
On the developer workflow front, OpenCode—an open-source AI coding agent—has expanded into a beta desktop app for macOS, Windows, and Linux. The interesting angle here isn’t just “yet another coding assistant.” It’s the model-agnostic posture: you bring your own provider, whether that’s hosted APIs, local models, or existing accounts like ChatGPT subscriptions or GitHub Copilot. For teams, that flexibility can be the difference between experimenting and standardizing. OpenCode is also leaning into collaboration features—parallel sessions on the same project and shareable session links—plus a privacy-first promise that it doesn’t store your code or context. For anyone working with sensitive repos, that claim is the entire ballgame, assuming the implementation matches the messaging. Either way, the broader story is clear: AI dev tools are moving from “plugin you try” to “workspace you live in.”
Atuin makes terminal history smarter
Another tool that’s evolving fast: Atuin, the shell history and sync utility, just shipped version 18.13. The big win is responsiveness—history search gets a hot in-memory index so it can feel instant even when your command history is enormous. That’s not glamorous, but it’s the kind of speed upgrade you notice every day. It also adds an optional AI command helper that turns plain-English intent into shell commands, with safety friction for risky operations and privacy defaults that limit what context gets shared unless you explicitly allow it. The theme is practical AI: not magical automation, just a faster path from “what do I type?” to “here’s a command you still need to approve.”
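That “safety friction” idea is worth sketching. The following is a hypothetical illustration, not Atuin’s code: match an AI-suggested command against known-risky shapes and require explicit approval before anything runs. The pattern list and function names are invented for the example.

```python
# Hypothetical sketch of safety friction for AI-suggested shell commands.
# NOT Atuin's implementation; patterns and names are illustrative only.
import re
from typing import Optional

RISKY_PATTERNS = [
    r"\brm\s+-[a-z]*r[a-z]*f",   # recursive force delete
    r"\bdd\s+.*\bof=/dev/",      # writing to a raw device
    r"\bmkfs(\.|\b)",            # formatting a filesystem
]

def needs_confirmation(cmd: str) -> bool:
    """True if the suggested command matches a risky pattern."""
    return any(re.search(p, cmd) for p in RISKY_PATTERNS)

def review(cmd: str, approve) -> Optional[str]:
    """Gate risky commands behind an explicit approval callback."""
    if needs_confirmation(cmd) and not approve(cmd):
        return None   # user declined; nothing runs
    return cmd        # safe (or explicitly approved) command passes through
```

The key property is that the model never executes anything directly—every risky suggestion has to cross a human-approval boundary first.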
BYD pushes megawatt EV charging
Now to hardware and infrastructure: BYD says it has upgraded its EV “Flash Charger” system to deliver up to 1,500 kW—enough, it claims, to jump from roughly 10 to 70 percent in about five minutes on compatible vehicles. If you’ve been following EV charging, you know why this is provocative: it’s approaching the time it takes to buy a coffee, not the time it takes to watch a show. But the real constraint is what comes after the press release. Megawatt-level charging pushes the problem into grid capacity, site upgrades, and often on-site storage to avoid hammering local networks. And there’s a more subtle takeaway: even if five-minute charging becomes common, it may not transform daily ownership, because most charging still happens at home. The “why it matters” is less about convenience for everyone, and more about enabling long-distance driving at scale—if the infrastructure can keep up.
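The claimed numbers are easy to sanity-check with back-of-envelope arithmetic: 1,500 kW held for five minutes delivers 125 kWh, and if that covered a 10-to-70-percent top-up, the pack would need to be around 208 kWh at the sustained peak rate. Real charge curves taper, so treat this as an upper bound, not a measurement:

```python
# Back-of-envelope check on the 1,500 kW / five-minute claim.
# Real charging tapers with state of charge, so these are upper bounds.

def energy_delivered_kwh(power_kw: float, minutes: float) -> float:
    """Energy transferred at constant power over the given duration."""
    return power_kw * (minutes / 60.0)

def implied_pack_kwh(energy_kwh: float, soc_gain: float) -> float:
    """Pack size implied if energy_kwh covered soc_gain of a full charge."""
    return energy_kwh / soc_gain

e = energy_delivered_kwh(1500, 5)   # about 125 kWh in five minutes at peak
pack = implied_pack_kwh(e, 0.60)    # 10% -> 70% is a 0.60 fraction: ~208 kWh
```

Since that implied pack is larger than typical EV batteries, the arithmetic itself tells you the peak rate can’t be sustained for the whole session—consistent with the tapering and grid-capacity caveats above.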
FFmpeg explained for app builders
For the builders in the audience, there’s a solid high-level explainer making the rounds on FFmpeg’s architecture. FFmpeg is one of those ecosystems a lot of software depends on but that many developers touch only through copy-pasted command lines. This piece connects the dots between the familiar tools—ffmpeg, ffplay, ffprobe—and the underlying libraries used inside apps: libavformat, which handles container formats, and libavcodec, which decodes and encodes audio and video. The reason it’s useful is confidence: once you understand the boundaries—what reads streams, what decodes frames, what converts formats—you can integrate media capabilities into your own program without treating FFmpeg as an untouchable black box.
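That tools-versus-libraries boundary is visible even from a script: ffprobe (built on libavformat) can dump a file’s stream layout as JSON for your own code to consume via `ffprobe -v quiet -print_format json -show_streams input.mp4`. A minimal sketch of the consuming side—the sample result below is hand-written for illustration, not captured from a real file:

```python
# Parse ffprobe's JSON stream report into a compact summary.
# The sample payload is a trimmed, illustrative ffprobe result.
import json

def summarize_streams(ffprobe_json: str):
    """Map each stream to (index, codec_type, codec_name)."""
    data = json.loads(ffprobe_json)
    return [(s["index"], s["codec_type"], s["codec_name"])
            for s in data.get("streams", [])]

sample = """{
  "streams": [
    {"index": 0, "codec_type": "video", "codec_name": "h264"},
    {"index": 1, "codec_type": "audio", "codec_name": "aac"}
  ]
}"""

print(summarize_streams(sample))   # [(0, 'video', 'h264'), (1, 'audio', 'aac')]
```

Inspection through ffprobe, demuxing through libavformat, decoding through libavcodec—knowing which layer owns which job is exactly the confidence the explainer is selling.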
FilmKit brings Fujifilm tools to browser
A more niche, but genuinely clever open-source launch: FilmKit, a browser-based preset manager and RAW converter for Fujifilm X-series cameras. The standout idea is using WebUSB to talk directly to the camera, and then letting the camera’s own processor do the RAW-to-JPEG conversion. In practice, that could mean a lighter, cross-platform workflow—no heavyweight desktop suite—while still getting results consistent with what the camera would produce. It’s early and labeled beta, with limited device testing so far, but it hints at a broader trend: browsers becoming capable hardware companion apps, especially when open tools can reverse-engineer protocols responsibly and build community-tested compatibility.
Molly guards and safer interfaces
And to close, a small concept with big implications: the “molly guard”—that flip-up plastic cover over an important button, designed to stop accidental activation. It’s a reminder that good systems design often includes intentional friction. In software, we do it with confirmation dialogs and key combos that prevent catastrophic clicks. But the post also points out an opposite pattern: what you might call a reverse molly guard—interfaces that default safely after a timeout instead of waiting forever. That matters in operations: a single prompt that blocks a batch job overnight can be more expensive than an imperfect default choice. Sometimes the safest system is the one that keeps moving—carefully—when nobody is watching.
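One way to read the reverse-molly-guard idea in code—a hypothetical sketch, not taken from the post: a decision that an operator can answer explicitly, but that resolves to a safe default after a deadline instead of blocking a long-running system forever.

```python
# Hypothetical "reverse molly guard": an explicit answer wins, but past the
# deadline the decision falls back to a safe default rather than stalling.
import time

class GuardedDecision:
    def __init__(self, default: bool, timeout_s: float, clock=time.monotonic):
        self.default = default
        self.deadline = clock() + timeout_s
        self.clock = clock
        self.answer = None

    def respond(self, choice: bool) -> None:
        """Record an explicit operator decision (beats the default)."""
        self.answer = choice

    def resolve(self):
        """Explicit answer first; default after the deadline; None while pending."""
        if self.answer is not None:
            return self.answer
        if self.clock() >= self.deadline:
            return self.default
        return None   # still waiting, but bounded by the deadline
```

In an overnight batch job, a scheduler would poll `resolve()`: the guarded path keeps moving with a conservative default instead of stalling on an unanswered prompt—which is precisely the trade-off the post is pointing at.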
That’s it for today’s Hacker News roundup—March 21st, 2026. The thread running through a lot of these stories is control: who controls the web’s memory, who controls AI performance costs at inference time, and who controls what data your tools keep or share. Links to all the stories we covered can be found in the episode notes. Thanks for listening—until next time.