Tech News · May 12, 2026 · 7:45

AI used to weaponize zero-days & TanStack npm supply-chain breach - Tech News (May 12, 2026)

AI-assisted zero-days, TanStack npm breach, Claude on AWS, encrypted RCS, Gemini Omni leaks, GitLab layoffs, Figure robots, and science citation fraud.

Today's Tech News Topics

  1. AI used to weaponize zero-days

    — Google says it saw the first known case of criminals using an AI model to help discover and weaponize a zero-day, intensifying calls for tighter model release controls and faster patching.
  2. TanStack npm supply-chain breach

    — Dozens of @tanstack npm package artifacts were briefly published with malicious payloads, highlighting ongoing CI and open-source supply-chain risk across JavaScript dependencies.
  3. Claude Platform launches on AWS

    — AWS says Claude Platform is now generally available inside AWS accounts, simplifying enterprise procurement while adding IAM, CloudTrail auditing, and Marketplace billing—though data is processed outside AWS’s boundary.
  4. Gemini Omni video model leak

    — Leaked screenshots suggest Google is preparing a “Gemini Omni” video tool with strong in-chat editing and remixing, hinting at a broader multimodal push ahead of Google I/O 2026.
  5. Alphabet nears Nvidia in value

    — Investors are increasingly betting Alphabet can win across the AI stack—models, cloud distribution, and custom chips—narrowing the market-cap gap with Nvidia and reshaping AI leadership narratives.
  6. GitLab restructures for AI era

    — GitLab opened a voluntary separation program and is flattening management as it pivots toward agent-focused APIs, revamped CI/CD, and governance for human-plus-agent development workflows.
  7. Encrypted RCS arrives cross-platform

    — Apple and Google are testing end-to-end encrypted RCS messaging between iPhone and Android, closing a long-standing security gap for cross-platform texting when carriers support it.
  8. Figure robots coordinate bedroom cleanup

    — Figure showed two humanoid robots tidying a bedroom collaboratively without direct robot-to-robot messaging, signaling progress toward practical multi-robot coordination in real spaces.
  9. Fake citations surge in papers

    — A Lancet research letter reports fabricated references are rising fast in published papers, likely tied to AI “hallucinations,” raising alarms about peer review and scientific record integrity.
  10. Brain-controlled audio beats cocktail noise

    — Columbia researchers demonstrated a brain-controlled hearing system that boosts the voice you’re focusing on, a major step toward solving the “cocktail party problem” in hearing assistance.

Full Episode Transcript: AI used to weaponize zero-days & TanStack npm supply-chain breach

Google says it’s now seen criminal hackers use an AI model to help turn an unknown software flaw into a working attack—something many feared, but hadn’t clearly documented until now. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is May 12th, 2026. Let’s get into what happened, and why it matters.

AI used to weaponize zero-days

We’ll start in cybersecurity, where Google says it has identified what it believes is the first known case of criminals using an AI model to help uncover and weaponize a previously unknown “zero-day” vulnerability. Google spotted it after attackers used a Python script aimed at bypassing two-factor authentication on a widely used open-source admin tool. The vendor was notified in time to patch, but the bigger story is the signal: if AI lowers the cost of finding fresh vulnerabilities, defenders may have less time to react—and policymakers will push even harder for guardrails around the most capable models.

AI-assisted hacking goes industrial

That warning also lines up with Google’s broader threat assessment: it says AI-assisted hacking has already moved from “emerging” to “industrial.” The claim isn’t that models are magically doing everything end-to-end, but that they’re accelerating the boring, time-consuming parts—like refining phishing, iterating on malware, and speeding up exploit research. The takeaway for everyone else is simple: assume attackers are scaling up. Security teams need faster patching, better monitoring, and fewer brittle secrets sitting in CI environments.
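
On the “brittle secrets” point, the durable fix is short-lived credentials, but even a naive scan over build logs and environment dumps catches the most common long-lived token shapes. A minimal sketch, where the patterns are assumptions based on publicly documented token prefixes, not an exhaustive or official list:

```python
import re

# Illustrative only: a tiny scanner for common long-lived credential shapes
# that tend to linger in CI logs. The prefixes below are assumptions drawn
# from publicly documented token formats.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "npm_token": re.compile(r"npm_[A-Za-z0-9]{36}"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of credential patterns found in the given text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

ci_log = "publishing with token npm_" + "a" * 36 + " ... done"
print(find_secrets(ci_log))
```

A scan like this is a tripwire, not a cure: anything it finds should be rotated and replaced with short-lived, identity-based credentials.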

TanStack npm supply-chain breach

Speaking of CI risk: the JavaScript ecosystem got another supply-chain scare. Researchers say dozens of npm artifacts under the @tanstack namespace were published with malicious changes, including an obfuscated payload that looked designed to steal credentials from automated build systems like GitHub Actions. TanStack’s postmortem points to a chained workflow attack—abusing trust boundaries and publishing via an identity-based “trusted publisher” flow rather than stolen npm tokens. It’s a reminder that modern attacks don’t just target code; they target the automation that ships the code.
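
For consumers of an affected namespace, one practical mitigation is refusing floating version ranges, since a briefly malicious patch release can reach builds through a range like `^5.0.0` without any change to your own code. A hypothetical sketch of that check (package names and versions here are illustrative):

```python
import json
import re

# Illustrative manifest; the package names and versions are made up.
manifest = json.loads("""
{
  "dependencies": {
    "@tanstack/react-query": "^5.0.0",
    "left-pad": "1.3.0"
  }
}
""")

EXACT = re.compile(r"^\d+\.\d+\.\d+$")

def floating_ranges(deps: dict) -> list[str]:
    """Return dependency names whose version spec is not an exact x.y.z pin."""
    return [name for name, spec in deps.items() if not EXACT.match(spec)]

print(floating_ranges(manifest["dependencies"]))
```

Exact pins plus a lockfile don’t stop a compromised release you install deliberately, but they do close the automatic-upgrade path that these short-lived malicious versions rely on.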

AI vulnerability scanning reality check

And while we’re on AI and security, curl creator Daniel Stenberg shared a reality check on AI vulnerability scanning. Anthropic’s much-discussed “Mythos” model was used to scan curl, and the report initially flagged multiple “confirmed” issues. After review, curl’s team says only one was a real security bug, with the rest landing as false positives or non-security problems. The bigger point isn’t that AI scanning is useless—it’s that it’s becoming baseline, and humans still need to validate what machines claim, especially when the stakes include CVEs and panic headlines.

Claude Platform launches on AWS

Now to enterprise AI adoption: AWS says Claude Platform is generally available directly inside AWS accounts. The practical win here is reduced friction—teams can use Anthropic’s native Claude APIs and tooling, but with AWS-style identity controls, centralized billing, and audit trails via CloudTrail. The important footnote: AWS says the service is operated by Anthropic, and requests are processed outside the AWS security boundary. So it’s a great fit for organizations that prioritize procurement simplicity and governance alignment—less so for teams with strict data residency constraints.

Gemini Omni video model leak

On the Google side, a leak may have revealed what’s next for Gemini: a “Gemini Omni” video model that appears built around editing and remixing clips directly in chat, not just generating video from scratch. Early reactions to raw visual quality sounded mixed, but the editing claims—like rewriting scenes with simple instructions—are what got people’s attention. If this shows up officially at Google I/O, it’ll be another sign that the big AI race isn’t just about smarter text models; it’s about shipping creative tools that slot into everyday workflows.

Alphabet nears Nvidia in value

Markets are reacting to that broader AI push, too. Alphabet is increasingly being framed as a full-stack AI beneficiary—consumer distribution through Search and YouTube, enterprise scale in Google Cloud, competitive Gemini models, and its own chips that it wants customers to use more broadly. That combination has investors talking about Alphabet potentially overtaking Nvidia in market value. The deeper message is that “who wins AI” may depend less on any single model release, and more on who controls multiple layers: distribution, infrastructure, and the economics of running AI at scale.

GitLab restructures for AI era

Inside developer tooling, GitLab is restructuring around what it calls an “AI era” strategy, including a voluntary separation program, fewer management layers, and a smaller footprint across countries. Leadership says it’s not simply cutting costs, and that savings will be reinvested into agent-oriented platform work—things like better governance and CI/CD designed for a world where humans and autonomous agents both ship changes. For employees and customers, it’s still a big moment: it shows how quickly the big software platforms believe they must reorganize to stay relevant as coding agents become normal.

Encrypted RCS arrives cross-platform

In mobile privacy, Apple and Google are rolling out end-to-end encrypted RCS messaging in beta, enabling secure texting between iPhone and Android—when carriers support it. This closes a long-running gap where cross-platform chats often ended up less protected than iMessage-to-iMessage conversations. The key is interoperability: encryption that works across the two dominant phone ecosystems. If it scales smoothly, this becomes one of those quiet security upgrades that most people never asked for explicitly—but benefit from every day.

Figure robots coordinate bedroom cleanup

Robotics had one of the day’s more eye-catching demos: Figure released video of two humanoid robots tidying a bedroom together—hanging a coat, putting items away, and making a bed—quickly and without direct robot-to-robot communication. The interesting part isn’t the chore itself; it’s the coordination. Getting two machines to work around each other safely, in a messy real environment, is exactly what you’d need for homes, hospitals, and warehouses. Demos aren’t deployments, but the direction is clear: general-purpose physical automation is inching forward.

Fake citations surge in papers

Two final stories from research. First, The Lancet reports a sharp rise in fabricated citations in scientific papers—references that look real, but don’t exist—likely linked to AI tools that confidently invent plausible sources. Even if the total number is still small, the rate is climbing fast, and the risk is serious: fake citations can mislead future studies, and in medicine they can pollute the evidence chain behind guidelines. The obvious fix is automated reference checking at submission, but the cultural fix is just as important: authors can’t outsource accountability to a tool.
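
The automated-checking idea can be made concrete: a submission system can flag any reference whose identifier is malformed or unknown before a human ever reviews it. A minimal sketch, where the stand-in index and both DOIs are invented for illustration (a real checker would query a registry such as Crossref):

```python
import re

# Stand-in for a lookup against a real citation registry; the DOIs below
# are fabricated for illustration.
KNOWN_DOIS = {"10.1016/s0140-6736(25)01234-5"}

# Loose DOI shape: "10.", a 4-9 digit registrant code, "/", then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def screen_references(dois: list[str]) -> list[str]:
    """Return DOIs that are malformed or absent from the trusted index."""
    return [d for d in dois
            if not DOI_PATTERN.match(d.lower()) or d.lower() not in KNOWN_DOIS]

submitted = ["10.1016/S0140-6736(25)01234-5", "10.9999/made-up.2026"]
print(screen_references(submitted))
```

A flagged reference isn’t proof of fabrication—registries lag and authors typo—but it gives editors a short, checkable list instead of an honor system.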

Brain-controlled audio beats cocktail noise

And in health tech, Columbia University researchers shared what looks like the first direct evidence in humans that a brain-controlled hearing system can help a listener lock onto a single voice in a crowd. In a controlled setting with patients who already had brain electrodes for medical reasons, the system detected which speaker the person was paying attention to and boosted that voice in real time. It’s still early, and today’s setup isn’t a consumer device—but it points toward hearing tech that follows your intent, not just the loudest sound in the room.
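
For a rough sense of how such a system might choose which voice to boost, here is a toy sketch of the general auditory-attention-decoding idea—an assumed simplification, not the researchers’ actual pipeline: correlate a neural proxy signal with each speaker’s amplitude envelope, then amplify whichever speaker the brain appears to be tracking. All signals below are made up.

```python
import math

def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

speaker_a = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9]   # envelope of voice A (made up)
speaker_b = [0.5, 0.4, 0.6, 0.3, 0.5, 0.4]   # envelope of voice B (made up)
neural    = [0.2, 1.0, 0.3, 0.9, 0.2, 1.0]   # proxy signal tracking voice A

# Boost the speaker whose envelope best matches the neural signal.
attended = "A" if pearson(neural, speaker_a) > pearson(neural, speaker_b) else "B"
gain = {"A": 4.0 if attended == "A" else 0.25,
        "B": 4.0 if attended == "B" else 0.25}
print(attended, gain)
```

The hard parts the sketch skips—decoding envelopes from real neural recordings, separating the speakers in the first place, and doing it with low latency—are exactly what makes the reported human result notable.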

That’s the tech landscape for May 12th, 2026—AI pushing into security, platforms, and even how we hear, while the risks around software supply chains and research integrity keep rising alongside the benefits. If you want, send me the one story you think will matter a year from now, and I’ll follow the thread. Until next time, thanks for listening to The Automated Daily, tech news edition.
