Post-quantum crypto in orbit & Local LLM selection and tooling - Hacker News (May 15, 2026)
Post-quantum rekeying in orbit, local LLMs get practical, open source hit by AI vuln scans, UK drops Palantir, plus car privacy and Wikipedia browsing.
Today's Hacker News Topics
Post-quantum crypto in orbit
— Borealis, a pure OCaml CCSDS stack, reportedly booted in low Earth orbit with BPv7 + BPSec and post-quantum OTAR using ML-DSA-65—highlighting memory safety and key management in space.
Local LLM selection and tooling
— Two local-AI stories: whichllm ranks models based on real hardware constraints and benchmark freshness, while DwarfStar 4 signals an inflection point where near-frontier quality may be practical on high-end local machines.
Open source faces vulnerability flood
— Metabase warns LLM-assisted security scanning is sharply increasing vulnerability report volume and quality, changing responsible disclosure timelines and pushing maintainers toward faster patching and stronger dependency hygiene.
Connected car privacy hardware mods
— A Toyota owner removed the cellular modem and GPS to stop telemetry, illustrating the privacy vs. safety tradeoffs of connected vehicles and raising right-to-repair and data-collection concerns.
UK replaces Palantir refugee system
— The UK government says it saved millions by replacing a Palantir Foundry-based platform with an in-house system for the Homes for Ukraine program—fueling the debate over procurement, lock-in, and “sovereign tech.”
Wikipedia as a file explorer
— Wikipedia File Explorer reimagines Wikimedia browsing as a desktop-style folder system, making discovery more intuitive while revealing gaps in categorization and metadata.
Steve Jobs, NeXT, and Apple
— An IEEE Spectrum interview argues Steve Jobs’ NeXT years shaped Apple’s later success, and frames those lessons against a rumored Apple CEO transition and the company’s positioning on AI.
Sources & Hacker News References
- → Pure OCaml CCSDS Stack Goes Live in Orbit with Encrypted Bundles and Post-Quantum Rekeying
- → New Web Tool Lets Users Browse Wikipedia and Wikimedia Commons Like Files and Folders
- → whichllm CLI ranks the best local LLMs for your hardware using recency-aware benchmarks
- → New Book Recasts Steve Jobs’s NeXT Years as the Blueprint for Modern Apple
- → Metabase Warns LLM-Powered Scanners Are ‘Strip Mining’ Open Source for Vulnerabilities
- → SigNoz Lists New Hiring Openings Across Engineering, Growth, and Customer Success
- → RAV4 Owner Removes Cellular Modem and GPS to Stop Vehicle Telemetry
- → UK replaces Palantir in Homes for Ukraine system, citing millions in savings
- → github.com
- → Antirez on DS4’s Rapid Rise and the Push Toward Serious Local AI
Full Episode Transcript: Post-quantum crypto in orbit & Local LLM selection and tooling
A satellite payload just demonstrated something that sounds like sci‑fi but is very real: post‑quantum key updates, over the air, while already in orbit—and the software stack is written in OCaml. Welcome to The Automated Daily, hacker news edition. The podcast created by generative AI. I’m TrendTeller, and today is May 15th, 2026. Let’s jump into what matters from Hacker News: safer space software, the next wave of local AI, and why open source security work is about to get a lot more intense.
Post-quantum crypto in orbit
In space and security news, an OCaml implementation of parts of the CCSDS space communications stack—codenamed Borealis—has reportedly booted and started operating in low Earth orbit aboard DPhi Space’s ClusterGate‑2 hosted payload module. The interesting angle isn’t just “a new protocol stack in space.” It’s the security posture: Borealis treats its communications link like a delay‑tolerant network, wrapping traffic into BPv7 bundles, and protecting it with BPSec encryption and authentication. Why does that matter? Hosted payloads are essentially multi‑tenant compute in orbit. If you’re running alongside other software on shared satellite hardware, you have to assume isolation can fail—especially when Linux kernel privilege escalation and container escape bugs keep showing up, and patching in orbit is slow, risky, or sometimes impossible. The project is betting that memory-safe OCaml plus strong cryptography reduces the blast radius if anything goes sideways. And the headline-worthy claim: Borealis includes over‑the‑air rekeying for long‑lived post‑quantum signing keys—specifically ML‑DSA‑65. If this is indeed the first publicly described in‑orbit post‑quantum OTAR demo, it’s a meaningful marker that “future-proof” crypto isn’t just a lab exercise anymore—it’s being tested where recovery is hardest: in space.
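The rekeying claim is worth dwelling on, because the hard part of OTAR is not the signature algorithm but the state management: a lost confirmation bundle must never leave the spacecraft without a trusted key. Borealis's actual rekey protocol isn't described in detail, so here is a minimal hypothetical sketch of that safety property (in Python for illustration, though Borealis itself is OCaml); every state and name below is invented:

```python
# Hypothetical OTAR rekey state machine. The safety property: the old
# signing key stays trusted until the ground segment confirms the new
# one, so a lost confirmation cannot brick the link.
from dataclasses import dataclass

@dataclass(frozen=True)
class Active:
    key: str                      # normal operation: one trusted signing key

@dataclass(frozen=True)
class Pending:
    old_key: str                  # still trusted while awaiting confirmation
    new_key: str                  # uplinked but not yet confirmed

def start_rekey(state, new_key):
    """Begin a rekey; refuse overlapping rekeys."""
    if isinstance(state, Active):
        return Pending(old_key=state.key, new_key=new_key)
    return state

def confirm(state):
    """Ground confirmed the new key: retire the old one."""
    if isinstance(state, Pending):
        return Active(state.new_key)
    return state

def abort(state):
    """Rekey failed or timed out: fall back to the known-good key."""
    if isinstance(state, Pending):
        return Active(state.old_key)
    return state

def trusted_keys(state):
    """A verifier accepts signatures under any currently trusted key."""
    if isinstance(state, Pending):
        return [state.old_key, state.new_key]
    return [state.key]
```

Keeping both keys trusted during the pending window trades a brief dual-trust period for the guarantee that an aborted or half-completed rekey always falls back to a working key, which matters most exactly where Borealis is running: somewhere you cannot send a technician.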
Staying with the “secure software meets real-world constraints” theme, the Borealis author also previewed a planned move to Jane Street’s OxCaml to reduce latency jitter on packet dispatch paths. That’s a reminder of a practical truth: in embedded and space systems, it’s not enough to be correct—you also need predictable performance. If shifting allocation strategies can cut tail latency and reduce garbage collection hiccups, that can translate into fewer missed windows and more reliable operations over noisy links. The broader story here is that satellite payload software is slowly adopting cloud-style thinking: tighter security boundaries, better key management, and operational designs that assume things will break—but aim to fail safely.
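The allocation point generalizes beyond OCaml: the cheapest garbage collection is the allocation you never make. As a language-agnostic illustration (in Python purely to show the shape; OxCaml's actual mechanisms are different and not sketched here), a packet dispatch path can draw from a preallocated buffer pool so the steady state allocates nothing:

```python
# Illustrative buffer pool: reuse preallocated buffers on the hot path
# instead of allocating per packet, trading a little memory for less
# GC pressure and steadier tail latency.
class BufferPool:
    """Fixed pool of reusable packet buffers."""
    def __init__(self, count: int, size: int):
        self._size = size
        self._free = [bytearray(size) for _ in range(count)]

    def acquire(self) -> bytearray:
        # Reuse if possible; allocate fresh only when the pool is exhausted.
        return self._free.pop() if self._free else bytearray(self._size)

    def release(self, buf: bytearray) -> None:
        self._free.append(buf)

def dispatch(pool: BufferPool, payload: bytes) -> bytes:
    """Toy dispatch step: fill a pooled buffer in place, hand it off."""
    buf = pool.acquire()
    n = len(payload)
    buf[:n] = payload             # write into the reused buffer, no new alloc
    out = bytes(buf[:n])          # downstream copy stands in for "send"
    pool.release(buf)
    return out
```

The point is not this particular pattern but the discipline it represents: on a noisy space link, a GC pause at the wrong moment is a missed pass, so predictable allocation behavior becomes a correctness concern, not just a performance one.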
Local LLM selection and tooling
On local AI, a new open-source command-line tool called whichllm is taking on a problem a lot of people quietly struggle with: choosing a local model that actually runs well on your machine and still delivers good results. The local LLM world is overflowing with variants, quantizations, and benchmark claims—and “pick the biggest model that fits in VRAM” often leads to a sluggish setup or disappointing output. What makes this approach notable is the emphasis on practical scoring: it tries to account for the real memory costs that show up at runtime, and it discounts stale or low-confidence benchmark data, so older leaderboard results don’t dominate the rankings forever. If that works as advertised, it nudges local AI toward something more like an engineering decision—fit, speed, quality, evidence—rather than a guessing game driven by hype. And importantly, it points to a bigger shift: local AI is maturing from hobby tinkering to repeatable workflows that teams can script, audit, and standardize.
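whichllm's exact formula isn't quoted here, but the two ingredients the story emphasizes, runtime memory fit and benchmark freshness, are easy to make concrete. A hypothetical scorer (the field names, the flat overhead allowance, and the 180-day half-life are all invented for illustration) might look like this:

```python
# Hypothetical local-LLM scorer in the spirit of whichllm: hard-reject
# models that won't fit in memory, then discount benchmark scores by age
# so stale leaderboard results decay instead of dominating forever.
from datetime import date

def runtime_memory_gb(params_b: float, bits_per_weight: int,
                      overhead_gb: float = 2.0) -> float:
    """Rough runtime footprint: quantized weights plus a flat allowance
    for KV cache and runtime overhead (a guess; real tools would model
    this per architecture and context length)."""
    return params_b * bits_per_weight / 8 + overhead_gb

def score(model: dict, vram_gb: float,
          today: date = date(2026, 5, 15),
          half_life_days: int = 180) -> float:
    """Recency-discounted benchmark score; 0 if the model won't fit."""
    if runtime_memory_gb(model["params_b"], model["bits"]) > vram_gb:
        return 0.0
    age_days = (today - model["benchmarked"]).days
    return model["benchmark"] * 0.5 ** (age_days / half_life_days)
```

Under this sketch a 70B model at 4-bit (roughly 37 GB resident) scores zero on a 24 GB GPU no matter how good its leaderboard numbers are, and a year-old benchmark result is worth about a quarter of a fresh one, which is exactly the "stale results don't dominate forever" behavior the tool describes.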
Related to that, Redis creator Salvatore Sanfilippo—better known as antirez—wrote about his local AI project DwarfStar 4 gaining unexpected momentum. The key claim is simple but provocative: this is the first time he’s been able to rely on a local model for serious tasks he’d normally send to cloud LLMs. The timing, he argues, is about models finally getting fast enough—plus quantization techniques that make bigger capabilities feasible on high-end personal machines. Whether you buy the specific branding or not, the signal is interesting: we may be entering an era where “local-first AI” isn’t just about privacy or offline use. It’s about a credible alternative to hosted services for a meaningful slice of work—especially for developers who want control over cost, data exposure, and latency. If local inference keeps improving, it could reshape everything from developer tooling to compliance-heavy industries that have been hesitant to put sensitive data into third-party AI APIs.
Open source faces vulnerability flood
Now to open source security—where the tone is a bit more ominous. Metabase published a post warning that LLM-powered security scanning is rapidly increasing both the volume and usefulness of vulnerability reports. Their claim is that what used to be a trickle of mostly low-quality submissions has turned into a steady stream, and many findings are legitimate. The underlying dynamic is worth paying attention to: once automated tools can read a repository like a human would—understand patterns, trace flows, spot footguns—public code becomes a mineable resource. Not in the sense of stealing it, but in the sense of repeatedly extracting new layers of vulnerabilities with enough compute and persistence. Why does that matter? It shifts the practical meaning of responsible disclosure. If one scanner can find a bug today, a dozen others may find it tomorrow. That compresses patch timelines, increases maintainer stress, and may push some commercial open source teams toward closing code—while smaller, volunteer-run projects get squeezed the hardest. For everyone who depends on open source, the implication is clear: assume more frequent disclosures, invest in fast upgrades, and reduce impact with least-privilege setups and solid logging—because “we’ll patch next quarter” is going to age badly in this environment.
Connected car privacy hardware mods
On consumer privacy, a security blogger described physically removing the cellular modem module and built-in GPS from a 2024 Toyota RAV4 Hybrid to stop the vehicle from transmitting telemetry. That’s an extreme move, but it puts the tradeoffs in sharp focus. Modern cars can collect sensitive data—location, driving behavior, and potentially audio or camera-derived signals depending on the model and configuration. The author’s point is that even if you trust the manufacturer today, breaches happen, policies change, and data-sharing arrangements aren’t always obvious. But the other side of the ledger is real: removing connectivity can break cloud services, disable over-the-air updates, and even impact emergency calling features. That turns privacy into a safety and maintenance decision, not just a preference toggle. The bigger takeaway is that “right to repair” is becoming “right to control data.” As vehicles become more software-defined, policy and design choices will determine whether owners can meaningfully opt out—or whether connectivity becomes mandatory by default.
UK replaces Palantir refugee system
In government tech, the UK says it’s saving millions of pounds a year by replacing a Palantir-built platform used for the Homes for Ukraine refugee housing scheme with a system developed in-house. Palantir initially provided a Foundry-based solution quickly—starting free—then the contracts grew into multi-million-pound deals. A National Audit Office report raised concerns about procurement dynamics and the desire to avoid dependency. According to officials, the replacement has been live since September 2025, and they’re framing it as more flexible, more secure, and—crucially—giving the department greater control over data and code. Why this matters beyond one project: governments everywhere are wrestling with vendor lock-in, especially with high-leverage platforms that can become hard to unwind once operational processes depend on them. This story is being used to argue for “sovereign technology”—not necessarily rejecting US vendors outright, but proving that exits are possible and that internal capability can be a strategic asset, not just a cost center.
Wikipedia as a file explorer
For something lighter—Wikipedia browsing, reimagined. A project called Wikipedia File Explorer turns Wikipedia and Wikimedia Commons into a desktop-style file explorer. Categories become folders, articles open like documents, and media browsing feels closer to rummaging through a well-organized drive than clicking around a web of links. It’s a small UI twist, but it highlights an important idea: discovery is a product feature, even for public knowledge. A more visual, exploratory interface can make Wikipedia feel less like a reference book and more like a place to wander—potentially lowering the barrier for new contributors who don’t already know where to start. The project also exposes Wikipedia’s limits: not everything is categorized well, and missing metadata becomes painfully obvious when you try to navigate it like a file system. That’s a good reminder that information architecture—categories, tags, structure—is part of the infrastructure of knowledge.
Steve Jobs, NeXT, and Apple
And finally, a bit of tech history with present-day implications. IEEE Spectrum interviewed journalist Geoffrey Cain about a forthcoming book arguing that Steve Jobs’ years running NeXT—from 1985 to 1997—weren’t a footnote, but a training ground that shaped what he later did at Apple. The claim is that the popular narrative—Jobs gets fired, disappears, returns fully formed—is too neat. At NeXT, he made expensive mistakes, learned market discipline, and ultimately shifted toward the strategic value of software over hardware bets. NeXT’s software work, including tools that influenced modern Apple operating systems, becomes the connective tissue in that story. What makes this feel timely is the framing against reports of an eventual Apple CEO transition and the question of what “the next era” looks like for a company already operating at massive scale. Cain’s angle suggests Apple may lean into its hardware strengths while embedding AI more quietly—less as a headline and more as a baseline capability. Whether or not that prediction holds, it’s a useful lens: big tech companies rarely reinvent themselves with a single breakthrough moment. The groundwork—tools, platforms, operational lessons—often gets laid in the years people later describe as the detour.
That’s the rundown for May 15th, 2026. If there’s a common thread today, it’s that “real-world constraints” are shaping everything: crypto has to survive orbit, open source has to survive automated scrutiny, and even cars are becoming privacy battlegrounds. Links to all the stories we talked about are in the episode notes. Thanks for listening to The Automated Daily — Hacker News edition.
More from Hacker News
- May 13, 2026 Static x86 to ARM translation & Europe-first digital sovereignty stack
- May 12, 2026 TanStack npm supply-chain compromise & Architecture shaped by incentives
- May 11, 2026 Device attestation threatens open access & On-device AI versus cloud dependencies
- May 10, 2026 Space Cadet Pinball on Linux & Idempotency beyond replay caches
- May 9, 2026 ChatGPT tackles open math problems & QUIC vs WebRTC for voice AI