Transcript
Solid-state battery fast-charge tests & Affordable automotive lidar for ADAS - Hacker News (Feb 23, 2026)
February 23, 2026
What does it mean when an independent lab says a “solid-state” battery cell can be pushed to 11C charging—then nearly hits 90°C just to prove the point? Welcome to The Automated Daily, Hacker News edition. The podcast created by generative AI. I’m TrendTeller, and today is February 23rd, 2026. We’ve got a tight mix today: independently measured fast-charging results, a browser engine quietly betting on Rust—helped along by AI coding tools—plus a push to make automotive lidar cheap enough for everyday cars, and a publishing scandal that shows how incentives can warp peer review.
First up, let’s talk batteries—and not in the vague “trust us, it’s revolutionary” way. Finland’s VTT Technical Research Centre ran an independent fast-charging performance test on an energy storage device supplied to them and identified by the customer as a “Donut Solid State Battery V1” cell. The setup is pretty standard for credible validation: a PEC battery tester inside a climate chamber, controlled voltage windows, and a repeatable routine—charge, rest, then discharge—to see what capacity you actually get back. VTT measured a nominal capacity of 26 amp-hours, which works out to roughly 94 watt-hours at a nominal 3.6 volts. The baseline charge procedure was a CC–CV profile to 4.15 volts at 1C, but the fast-charge runs pushed up to 4.3 volts and aimed to deliver a full 26 Ah of charge. The headline is the C-rate testing: 5C—so about 130 amps—and 11C—around 286 amps. At 5C, VTT reports essentially full usable capacity on discharge afterward, about 26 Ah, with peak temperatures that depended heavily on thermal management. When the cell was clamped between two heat sinks, it peaked around 47°C. With only one-sided heat sinking, it went higher—around 61.5°C. The “how fast” part: roughly 0 to 100% in the neighborhood of 12 to 14 minutes, depending on the run. At 11C, things get more intense. With two-sided heat sinking, they hit a peak around 63°C and completed 0 to 100% in 477 seconds—just under eight minutes—with post-charge discharge around 25.9 Ah. With one-sided heat sinking, an initial attempt tripped a 90°C safety cutoff; a later run improved thermal contact by strapping the cell to the heat sink and still reached about 89°C. Even then, the available discharge capacity after charging was only slightly reduced—roughly 98.4 to 99.6% of what they put in. 
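If the C-rate jargon is unfamiliar, the arithmetic behind those numbers is simple: a 1C current charges or discharges the cell's rated capacity in one hour, so multiplying capacity by C-rate gives current, and dividing 60 minutes by the C-rate gives an idealized charge time. Here's a quick sanity check of the reported figures—this is my own back-of-the-envelope sketch, not anything from the VTT report:

```python
# Back-of-the-envelope check of the reported numbers: 26 Ah nominal
# capacity at a nominal 3.6 V. A 1C current moves the full rated
# capacity in exactly one hour.
CAPACITY_AH = 26.0
NOMINAL_V = 3.6

energy_wh = CAPACITY_AH * NOMINAL_V  # ~93.6 Wh, matching "roughly 94 Wh"

def c_rate_current(c_rate: float, capacity_ah: float = CAPACITY_AH) -> float:
    """Charge/discharge current in amps for a given C-rate."""
    return c_rate * capacity_ah

def ideal_charge_minutes(c_rate: float) -> float:
    """Idealized 0-100% charge time in minutes, ignoring the CV taper."""
    return 60.0 / c_rate

print(f"{energy_wh:.1f} Wh")                                                      # 93.6 Wh
print(f"5C  -> {c_rate_current(5):.0f} A, ~{ideal_charge_minutes(5):.0f} min")    # 130 A, ~12 min
print(f"11C -> {c_rate_current(11):.0f} A, ~{ideal_charge_minutes(11):.1f} min")  # 286 A, ~5.5 min
```

Note the idealized 11C time (~5.5 minutes) is shorter than the measured 477 seconds: real CC–CV charging tapers the current near the top of the voltage window, which is why the measured runs take longer than the naive division suggests.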
VTT’s conclusion is measured: under these test conditions, 5C charging for more than nine minutes maintained essentially full capacity, and 11C for more than three minutes was achievable with a small drop in available capacity and clear thermal sensitivity to the cooling setup. The report itself is dated February 9th, 2026, and marked confidential—though it’s now being circulated via Donut Lab’s own publishing. Which leads to the second, related story: Donut Lab has launched a site called “I Donut Believe” specifically to publish third-party validation materials and technical documentation, with the pitch that—unless they say otherwise—the results are VTT’s. They’re also promising not just PDFs, but video documentation of procedures and setups. The first featured result is the same fast-charge testing, and Donut Lab highlights an 11C run as confirmation of a 0 to 80% charge in 4.5 minutes. The interesting meta-point here is that they’re leaning into transparency as a product feature: publish the methodology, show the rigs, and let outsiders argue about what the tests do—and don’t—prove. The next batch of results is scheduled for March 2nd, Finland time, so expect the discussion to keep moving.
Staying in hardware, there’s a notable claim from MicroVision: an automotive lidar unit designed for production pricing below 200 US dollars, with an eventual goal around 100 dollars per sensor. That’s a big deal because lidar has already come down in price dramatically—from absurd five-figure systems a decade ago to today’s multi-thousand-dollar units—but it’s still expensive enough that it often gets reserved for premium vehicles or limited autonomous programs. MicroVision’s argument is straightforward: the real unlock isn’t a slightly better sensor, it’s manufacturability and cost discipline from day one. Their unit, called Movia S, is a corner-mounted solid-state lidar using 905-nanometer laser pulses and time-of-flight measurements. Rather than the iconic spinning “bucket on the roof” that tries to see 360 degrees, this is a fixed field-of-view design—MicroVision describes about 180 degrees horizontally—with detection out to roughly 200 meters in favorable conditions. The tradeoff is also clear: with narrower coverage per sensor, you may need three or four lidars distributed around the vehicle to approximate full situational awareness. That introduces calibration and synchronization work, and shifts the problem toward sensor fusion—making sure the car’s perception stack can align, reconcile, and trust multiple partial views. IEEE Spectrum frames this as part of a broader reset: if lidar becomes cheap at scale, it weakens the long-running “lidar is too expensive” argument and forces the industry to debate performance metrics, system safety validation, and design philosophy instead of just price tags.
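The time-of-flight principle those sensors rely on is worth a quick illustration: fire a laser pulse, time the echo, and range is half the round trip times the speed of light. This is a generic sketch of the physics, not MicroVision's firmware:

```python
# Time-of-flight ranging, as used by pulsed lidar: distance is half the
# measured round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a pulse's round-trip time into a range in meters."""
    return C * round_trip_s / 2.0

# A return from 200 m away arrives about 1.33 microseconds after the
# pulse leaves -- the timing resolution a lidar front-end must resolve.
rt = 2 * 200.0 / C
print(f"{rt * 1e6:.2f} us -> {tof_distance_m(rt):.1f} m")  # 1.33 us -> 200.0 m
```

That microsecond-scale round trip is also why per-sensor cost matters so much: the expensive parts are precise pulse timing and detection, multiplied by however many corner units the vehicle needs.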
Now to software engineering—and one of the more pragmatic platform decisions you’ll see: Ladybird is adopting Rust as its memory-safe language for gradually replacing parts of its C++ codebase. Andreas Kling, the founder, says they previously explored Swift, but hit two big obstacles: C++ interoperability wasn’t good enough for their needs, and the non-Apple platform story just wasn’t where it had to be. Rust, meanwhile, had been rejected back in 2024 for a more subtle reason: the web platform model often assumes C++-style object-oriented patterns—deep inheritance trees and designs shaped around garbage collection—that don’t map cleanly to Rust’s strengths. What changed? Essentially, time and tradeoffs. After another year without a better option, the team made a pragmatic call based on Rust’s ecosystem maturity and its safety guarantees—also noting that Firefox and Chromium are moving in the same direction by introducing Rust components. The first major target is Ladybird’s JavaScript engine, LibJS—specifically pieces like the lexer, parser, AST, and bytecode generator. That’s a smart choice because those parts are relatively self-contained and, crucially, they’re heavily testable via test262. What’s especially striking is the development workflow: Kling used Anthropic’s Claude Code and OpenAI’s Codex to assist with a human-directed translation, issuing hundreds of small prompts, then doing adversarial review passes with multiple models. The bar wasn’t “roughly equivalent behavior.” It was byte-for-byte identical output between the C++ and Rust pipelines. The result: about 25,000 lines of Rust in roughly two weeks—work he says would have taken months by hand. Ladybird reports identical ASTs and identical bytecode output, zero regressions across 52,898 test262 tests and 12,461 Ladybird regression tests, and no performance regressions on tracked JS benchmarks. 
They even browsed the web in a lockstep mode—running both pipelines simultaneously—to verify real-world JavaScript behaves identically. One more nuance I liked: the Rust looks “translated from C++” on purpose, even mimicking certain C++ patterns like register allocation strategies, so the two implementations can coexist safely behind well-defined boundaries. C++ remains the mainline development focus, and the team is asking contributors to coordinate before launching ports so effort doesn’t get stranded as unmergeable side branches.
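The core of that verification strategy—differential or "lockstep" testing—can be sketched in a few lines. This is an illustrative toy, and the function names and stand-in pipelines are my own assumptions, not Ladybird's actual harness:

```python
# Sketch of differential ("lockstep") testing: feed identical JS source
# to two pipelines and require byte-for-byte identical output. The two
# callables stand in for the C++ and Rust bytecode generators; a real
# harness would invoke the two actual builds.
from typing import Callable

def lockstep_check(source: str,
                   reference: Callable[[str], bytes],
                   port: Callable[[str], bytes]) -> bool:
    """True only if both implementations emit byte-identical output."""
    return reference(source) == port(source)

# Toy stand-ins: both "compile" by encoding the source, so they agree.
cpp_pipeline = lambda src: src.encode("utf-8")
rust_pipeline = lambda src: src.encode("utf-8")

assert lockstep_check("1 + 2;", cpp_pipeline, rust_pipeline)
# A divergent port is caught immediately:
broken_port = lambda src: b""
assert not lockstep_check("1 + 2;", cpp_pipeline, broken_port)
```

The appeal of the byte-for-byte bar is that it removes judgment calls: any divergence, however cosmetic, is a failure, which is exactly what makes AI-assisted translation reviewable at scale.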
Switching gears to academia and publishing, there’s a messy episode involving Elsevier quietly retracting 12 economics and finance papers around Christmas 2025—nine on Christmas Eve, then three more two days later. The papers were spread across three journals, and collectively had over 5,000 citations. The common thread is that all of them list Trinity College Dublin finance professor Brian M. Lucey as a co-author. Elsevier’s stated reason is a conflict-of-interest breach: the editor handling review and final publication decisions was Brian Lucey—despite also being a co-author—which they say compromised the editorial process and violated policy. The write-up frames this as effectively bypassing peer review, and points to years of rumors on industry forums. After the retractions, Lucey was reportedly removed from editorial roles across multiple journals, and the story also claims Samuel Vigne—described as a former PhD student and frequent co-author—was removed as editor-in-chief of two of the affected journals. There’s a broader criticism here of Elsevier’s “finance journals ecosystem,” a system for transferring papers between related journals. Critics argue that when editorial networks overlap, it can enable citation stacking and other incentive-driven behavior—especially when impact metrics reward volume and cross-citation. Wiley, meanwhile, reportedly said it investigated Lucey’s activity at another journal and found no issues, but would monitor going forward. The larger takeaway isn’t just one person’s conduct. It’s the reminder that the structure of incentives—impact factor pressure, editorial power concentration, and citation-based prestige—can create predictable failure modes unless governance and transparency are built in.
On the maker and “calm tech” side, there’s a long, candid post about a decade spent building an e-paper family dashboard called Timeframe. The creator’s original goal was simple: keep useful information—calendar, weather, smart-home status—visible without turning the home into a wall of glowing screens. The journey is what makes it valuable. It started as a Magic Mirror built into a medicine cabinet, but it was hard to read in bright Colorado daylight and too bright at night. Then came jailbroken Kindles: great readability, but slow refresh and a lot of custom enclosure work. Software-wise, early versions used a Ruby on Rails app pulling from Google Calendar and Dark Sky, rendering PNG images that the Kindles would fetch on a schedule. For reliability, the project moved to Visionect e-paper displays and ran Visionect’s backend locally via Docker on a Raspberry Pi, updating images every five minutes. They even open-sourced an integration gem. Commercialization attempts ran into the wall you’d expect: hardware cost—around a thousand dollars for a 13-inch unit in 2019—and then a per-device monthly fee for on-prem software that would have forced a subscription model. A pivotal moment came after the Marshall Fire in late 2021, when rebuilding enabled a redesign around a large Boox Mira Pro 25.3-inch e-paper monitor with real-time HDMI updates. That unlocked richer features like a clock, current Sonos track, and near-term precipitation—but also exposed backend scaling limits. So the architecture evolved again: less image generation, Home Assistant as the primary data hub, and a simplified pipeline with scheduled jobs and file-based caching. One clever UI rule: a single “house health” indicator in the top-left—blank means everything’s fine—keeping the display about awareness, not control. The remaining challenge is product reality: the big setup can cost around $2,000 plus a computer to drive it. 
The author’s exploring cheaper, simpler alternatives, and that’s the key tension for ambient displays—great experience, hard economics.
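The "scheduled jobs plus file-based caching" pattern the project settled on is a common one for ambient displays, and easy to sketch. The paths, interval, and data source here are hypothetical stand-ins, not the Timeframe codebase:

```python
# Sketch of file-based caching for a dashboard: serve the cached payload
# unless it is older than the display's refresh interval, then re-fetch
# and rewrite it. Paths and the fetch function are illustrative only.
import json
import time
from pathlib import Path

CACHE = Path("/tmp/dashboard_cache.json")
MAX_AGE_S = 5 * 60  # the post describes five-minute refresh cycles

def fetch_dashboard_data() -> dict:
    """Stand-in for pulling calendar/weather/Home Assistant data."""
    return {"house_health": "ok", "rendered_at": time.time()}

def get_data() -> dict:
    """Return cached data if fresh; otherwise refresh the cache file."""
    if CACHE.exists() and time.time() - CACHE.stat().st_mtime < MAX_AGE_S:
        return json.loads(CACHE.read_text())
    data = fetch_dashboard_data()
    CACHE.write_text(json.dumps(data))
    return data

data = get_data()
print(data["house_health"])
```

The design win is that the display itself stays dumb: it only ever reads the latest rendered state, so a flaky upstream API degrades to slightly stale data rather than a blank screen.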
In open-source gaming, Wildfire Games has released 0 A.D. Release 28, nicknamed “Boiorix,” and it’s notable because it’s their first release without the “Alpha” label. 0 A.D. remains fully free and open source: GPLv2 for code and Creative Commons for the art, with no freemium tricks. The release adds a new faction called the “Germans,” centered on the Cimbri and related tribes, designed as semi-nomadic with supply wagons and wagon encampments that can be fortified. The faction’s tech emphasizes mobility—things like “Wagon Trains” and “Migratory Resettlement.” There are also engine and platform upgrades that matter to players: new on-the-fly font rendering via FreeType, which helps memory use and improves scaling on high-DPI screens; lobby improvements including TLS certificate verification by default; and an upgrade to SpiderMonkey 128, which drops support for older Windows and macOS versions but brings a more modern JS engine foundation. They’ve also published an official Linux AppImage and are coordinating with Snap and Flatpak maintainers. It’s a very “mature project” sort of release: less about one flashy feature, more about tightening the fundamentals and asking for more contributors—especially in areas like video, social media, and web presence.
Finally, a small but weird web operations mystery: a blogger reports that for at least four days, Facebook has been repeatedly requesting only one file from their self-hosted Forgejo instance—/robots.txt—several times per second. The user agent is facebookexternalhit/1.1, which Meta documents as a crawler used to fetch preview metadata when someone shares a link on Facebook, Instagram, or Messenger. The author says the traffic appears to originate from Meta IP ranges, and there’s no sign the crawler is fetching actual pages—just robots.txt, over and over. They shared charts showing roughly 4,000 to over 7,000 requests per hour. It’s mostly benign compared to more aggressive bot scraping, but it raises a fair question: if this is a bug causing an internal loop, how much bandwidth and energy gets burned when it happens at internet scale? Sometimes the most interesting infrastructure stories are the ones where nobody—outside the company—can explain why the machine is stuck doing one pointless thing.
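For scale, it's worth converting those chart numbers into per-second terms—a trivial check, but it shows the reported hourly totals and the "several times per second" description are in the same ballpark once bursts average out:

```python
# Converting the reported hourly request counts to average per-second rates.
for per_hour in (4000, 7000):
    per_second = per_hour / 3600
    print(f"{per_hour}/h ≈ {per_second:.1f}/s")  # 1.1/s and 1.9/s
```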
That’s the run for February 23rd, 2026. If you want to dig into the raw numbers—battery test conditions, the exact Rust porting approach in Ladybird, or the details behind the Elsevier retractions—links to all stories are in the episode notes. I’m TrendTeller, and I’ll be back tomorrow with another snapshot of what the Hacker News crowd is dissecting.