Transformer runs on PDP-11 & CERN tiny AI in silicon - Hacker News (Mar 28, 2026)
PDP-11 runs a Transformer, CERN puts tiny AI on FPGAs, Spain’s laws hit Git, UK power prices go negative, plus macOS Wayland and AMD’s new X3D CPU.
Today's Hacker News Topics
- Transformer runs on PDP-11 — ATTN-11 implements a minimal Transformer in PDP-11 assembly, training a sequence-reversal task within tight memory limits. A great ML-systems and retro-computing study in fixed-point arithmetic, softmax tables, and self-attention.
- CERN tiny AI in silicon — CERN is deploying ultra-compact AI models directly on FPGAs for real-time LHC filtering, highlighting low-latency inference, hardware-embedded ML, and the High-Luminosity LHC data-rate challenge.
- Safer local AI agent runs — Stanford's "jai" adds lightweight isolation for AI agents on Linux, reducing the blast radius of risky commands without full containers, using copy-on-write overlays and sandboxing for safer local development.
- Spain's laws as a Git repo — The legalize-es repo turns Spain's BOE laws into version-controlled Markdown with amendment commits, enabling legal diffs, auditing, and reproducible legislative history built on open-data APIs.
- Wayland apps on macOS windows — Cocoa-Way introduces a Rust-based Wayland compositor for macOS that displays Linux Wayland apps as native macOS windows, emphasizing low-latency protocol forwarding and cross-platform desktop workflows.
- UK renewables drive negative prices — A live UK grid snapshot showed renewables dominating generation, with net exports coinciding with negative wholesale power prices, illustrating how wind and solar surges reshape markets and interconnector flows.
- AMD doubles 3D V-Cache — AMD's Ryzen 9 9950X3D2 adds 3D V-Cache to both chiplets, aiming for smoother top-end gaming and cache-sensitive performance without scheduling quirks on the company's flagship desktop CPU.
Sources & Hacker News References
- GitHub repo turns Spanish legislation into version-controlled Markdown with full reform history
- Wind and Solar Dominate UK Grid as Generation Exceeds Demand and Prices Turn Negative
- Cocoa-Way brings native Wayland app streaming to macOS via Rust compositor
- CERN Embeds Tiny AI in FPGA/ASIC Chips to Filter LHC Collisions in Nanoseconds
- Stanford releases jai, a lightweight sandbox to limit AI agent damage on Linux
- AMD unveils Ryzen 9 9950X3D2 with dual 3D V-Cache for 208MB total cache
- Toma seeks Senior/Staff engineer to scale real-time voice AI for car dealerships
- ATTN-11 Brings a Trainable Transformer to PDP-11 Assembly
- Blogger shares hack to force consistent window corner rounding on macOS 26
Full Episode Transcript: Transformer runs on PDP-11 & CERN tiny AI in silicon
Someone just trained a tiny Transformer, in straight PDP-11 assembly, on hardware that predates the modern internet. That's not a thought experiment; it actually converges. Welcome to The Automated Daily, Hacker News edition, the podcast created by generative AI. I'm TrendTeller, and today is March 28th, 2026. Let's get into what's moving in software, hardware, AI, and the real-world systems they run on.
Transformer runs on PDP-11
Let’s start with the retro-computing-meets-ML story of the day. A developer released ATTN-11: a working Transformer model implemented entirely in PDP-11 assembly language. It’s not trying to be state of the art; it’s trying to be understandable and runnable on severely constrained machines. The result is a clear reminder that the “magic” of Transformers isn’t exclusively tied to massive GPUs—it can be reduced to a small set of building blocks, if you’re willing to make tradeoffs. Why it matters: projects like this strip away the mystique and make it easier to reason about what’s essential, what’s optional, and what modern ML stacks are really buying you.
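The topic summary name-checks two of those building blocks: fixed-point arithmetic and softmax lookup tables. As a rough illustration of the trick, here is a generic Python sketch of an integer-only softmax, the kind of thing a machine without fast floating point relies on. This is not ATTN-11's actual code, and every constant (the Q8.8 format, the table range and size) is an assumption chosen for clarity:

```python
import math

# Q8.8 fixed point: a real value v is stored as the integer round(v * 256).
SCALE = 256                   # 8 fractional bits (illustrative choice)
TABLE_MIN, TABLE_MAX = -8, 0  # exp() inputs are clamped to [-8, 0]
STEPS = 256                   # lookup-table resolution

# Precomputed table: exp(x) for x spanning [TABLE_MIN, TABLE_MAX], stored as Q8.8 ints.
EXP_TABLE = [
    int(round(math.exp(TABLE_MIN + (TABLE_MAX - TABLE_MIN) * i / (STEPS - 1)) * SCALE))
    for i in range(STEPS)
]

def exp_fixed(x_q: int) -> int:
    """exp() of a Q8.8 value via table lookup (input clamped to the table range)."""
    x = max(TABLE_MIN * SCALE, min(TABLE_MAX * SCALE, x_q))
    idx = (x - TABLE_MIN * SCALE) * (STEPS - 1) // ((TABLE_MAX - TABLE_MIN) * SCALE)
    return EXP_TABLE[idx]

def softmax_fixed(logits_q: list[int]) -> list[int]:
    """Integer-only softmax: subtracting the max keeps every exp() input <= 0."""
    m = max(logits_q)
    exps = [exp_fixed(x - m) for x in logits_q]
    total = sum(exps)
    return [e * SCALE // total for e in exps]  # outputs sum to roughly SCALE

probs = softmax_fixed([2 * SCALE, 1 * SCALE, 0])  # logits 2.0, 1.0, 0.0
```

The table trades accuracy for speed and memory predictability, which is exactly the tradeoff the project is willing to make.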
CERN tiny AI in silicon
Sticking with AI, but jumping from vintage hardware to cutting-edge physics: CERN has started using ultra-compact AI models embedded directly into silicon—specifically FPGAs—to filter Large Hadron Collider data in real time. The LHC produces an absurd torrent of information, far beyond what anyone can store, so the system has to make split-second decisions about what’s worth keeping. Putting “tiny AI” into the earliest filtering stage is a practical response to that reality, and it becomes even more important as the High-Luminosity LHC ramps data rates up again. The bigger takeaway: not all AI progress is about larger models—sometimes it’s about making inference fast, cheap, and reliable under extreme latency constraints.
Safer local AI agent runs
Now to a very down-to-earth problem: running AI agents locally can go wrong in painfully ordinary ways—like deleting your home directory or wiping a repo. Stanford’s Secure Computer Systems group released “jai,” a lightweight Linux tool that aims to contain untrusted command-line workflows without requiring you to set up full containers or a VM. Think of it as a safer launch wrapper: keep your current project writable, but make the rest of your system much harder to damage. It’s explicitly not a silver bullet, but it’s an appealing middle ground for people who want to experiment with agents while reducing the potential blast radius. Why it matters: as AI tools become more autonomous, the default safety model of “just run it on your laptop” is looking increasingly outdated.
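To make the "blast radius" idea concrete, here is a minimal sketch of the run-in-a-disposable-copy pattern. This is not jai's implementation or API (jai reportedly uses copy-on-write overlays, which avoid the upfront copy this sketch makes), and the function name is invented:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_sandboxed(cmd: list[str], project: Path) -> Path:
    """Run cmd against a disposable copy of `project`.

    The real directory is never touched; the caller inspects the copy
    afterwards and decides whether to sync changes back. A crude
    stand-in for a copy-on-write overlay.
    """
    scratch = Path(tempfile.mkdtemp(prefix="sandbox-"))
    work = scratch / project.name
    shutil.copytree(project, work)               # eager copy; CoW would defer this
    subprocess.run(cmd, cwd=work, check=False)   # the agent runs here, not in `project`
    return work                                  # inspect/diff before merging back

# Example: a destructive command only destroys the copy
# work = run_sandboxed(["rm", "-rf", "src"], Path("my-repo"))
```

The real tools improve on this in both directions: overlays make the "copy" free, and kernel sandboxing restricts what the command can reach outside the working tree at all.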
Spain’s laws as a Git repo
Switching gears to civic tech and open data: a new GitHub repository called legalize-es has published Spain’s state legislation as a version-controlled Git project. Each law is stored as Markdown, with structured metadata, and each reform shows up as a commit—mapped to official publication dates and linked back to Spain’s Official State Gazette sources. The point isn’t to replace the official record; it’s to add a tooling layer on top of public-domain text. Why it matters: once laws are represented like code, you can audit changes precisely, compute diffs between versions, and build better search and alerting tools. This is the kind of infrastructure that makes transparency easier not just for lawyers, but for journalists, researchers, and anyone tracking how rules evolve over time.
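Once each law lives as a Markdown file with reforms as commits, a "legal diff" is just an ordinary text diff. A sketch with Python's standard difflib, using invented file names and article text (nothing below is taken from legalize-es or the BOE):

```python
import difflib

# Two hypothetical versions of one article, as they might sit in adjacent commits.
before = """Article 12. The deadline for appeal shall be one month.
The appeal shall be filed in writing.""".splitlines()

after = """Article 12. The deadline for appeal shall be two months.
The appeal shall be filed in writing or electronically.""".splitlines()

# unified_diff produces the familiar git-style view of a reform.
diff = list(difflib.unified_diff(
    before, after,
    fromfile="boe/art-12@2020", tofile="boe/art-12@2026",
    lineterm="",
))
for line in diff:
    print(line)
```

In practice you would just use `git diff` between two tagged commits; the point is that all of that tooling works unchanged once the text is in version control.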
Wayland apps on macOS windows
On the desktop side, macOS shows up in two very different ways today. First: Cocoa-Way, a new open-source project that acts as a native Wayland compositor for macOS, written in Rust. The aim is to let macOS users run Linux Wayland apps and have them appear like normal macOS windows—without dragging in old X11 layers or relying on heavyweight remote-desktop setups. If it works well in practice, it could become a neat bridge for developers who live on Macs but need Linux GUI tools, while keeping latency and friction low. Second: a developer blog post about an unexpectedly polarizing UI detail—window corner rounding in “macOS 26.” The complaint isn’t just that corners are round; it’s that they’re round in inconsistent ways across apps, making the system look mismatched. The author’s workaround is very much power-user territory: a runtime tweak that nudges third-party apps into the same visual style, aiming for consistency rather than perfection. Why it matters: it’s a small example of a bigger tension—people want customization, platforms want control, and when official knobs don’t exist, users reach for hacks that can be fragile or risky.
UK renewables drive negative prices
Let's zoom out to energy, where software meets infrastructure. A live snapshot of Great Britain's electricity system showed generation exceeding demand and the wholesale price dipping below zero. The mix at that moment was overwhelmingly renewable, with wind doing the heavy lifting and solar adding a sizable chunk, while gas was a relatively small slice. When supply surges like that, the grid doesn't just "have extra power"; it reshapes cross-border flows, storage decisions, and pricing dynamics in real time. Why it matters: negative prices are a signal that generation is changing faster than market design and grid flexibility can keep up. It's also a preview of the next set of challenges: storage, interconnectors, and demand shaping are becoming just as important as generation itself.
AMD doubles 3D V-Cache
Finally, in PC hardware: AMD announced the Ryzen 9 9950X3D2 “Dual Edition,” a flagship CPU that puts 3D V-Cache on both core chiplets instead of only one. The practical promise is less of that awkward split personality where some cores have the extra cache and others don’t—meaning fewer scheduling quirks and more predictable performance in games and other cache-sensitive workloads. The tradeoff is straightforward: more power and cooling demands, and slightly different boost behavior. Why it matters: at the high end, people aren’t just buying peak speed—they’re buying consistency, and AMD is clearly trying to sand down the sharp edges that made earlier designs a bit finicky.
That’s it for today’s Hacker News roundup—an assembly-language Transformer, tiny AI in CERN’s trigger systems, safer local sandboxes for agents, laws as Git history, macOS experiments, renewable-driven negative power prices, and AMD pushing cache even further. Links to all stories can be found in the episode notes. I’m TrendTeller—thanks for listening, and I’ll see you in the next one.