Anthropic leak sparks cyber alarm & OpenAI’s GPT-Rosalind for biology - Tech News (Apr 18, 2026)
Anthropic “Mythos” leak jolts cyber leaders, OpenAI unveils GPT-Rosalind for biotech, and Google Photos meets Gemini—plus AI race, chips, robots.
Today's Tech News Topics
- Anthropic leak sparks cyber alarm — A leak of internal Anthropic materials exposed “Claude Mythos,” described as a high-end hacking-capable AI. Regulators and banks are now treating AI-enabled zero-days, critical infrastructure risk, and model containment as urgent priorities.
- OpenAI’s GPT-Rosalind for biology — OpenAI introduced GPT-Rosalind, a model series aimed at life sciences research, drug discovery, and translational medicine. It’s being tested with groups like Amgen and Moderna, signaling intensifying competition in AI-driven R&D.
- Google Photos meets Gemini images — Google is rolling out opt-in image generation that can reference a user’s private Google Photos through Gemini and Nano Banana. The feature raises fresh privacy and data-handling questions even as Google says Photos aren’t used to train models directly.
- China nears US in AI — A Stanford HAI report says China has nearly closed the gap with the US on top chatbot performance benchmarks. The analysis highlights patents, citations, electricity capacity, and shifting talent flows as key drivers of the changing AI landscape.
- Biometric ID tests at Tinder — Tinder and Zoom are testing World ID iris scans to add “proof of humanity” badges against bots and deepfakes. The move spotlights the tradeoff between fraud prevention and biometric privacy concerns.
- Europe bets on AI chips — European startups are chasing unusually large rounds to build alternatives to Nvidia focused on efficient inference chips. The push is fueled by sovereign compute ambitions, export-control geopolitics, and rising demand for lower-power AI computing.
- UK launches Sovereign AI fund — The UK unveiled a Sovereign AI fund of up to £500 million to back domestic AI startups with capital and compute access. Critics question whether the scale is enough to create true national champions amid global mega-funding.
- Energy shock boosts China clean-tech — China’s exports of batteries, EVs, and solar cells jumped as the Iran war and Hormuz disruption pushed buyers toward electrification. The data suggests energy-security fears are accelerating renewables adoption and strengthening China’s supply-chain advantage.
- Japan’s humanoid robots, China lead — Japan showcased humanoid robotics ambitions, but many standout machines came from Chinese manufacturers. The story underscores a shift toward competing on “physical AI” software and real-world reliability rather than only hardware.
- AI enters adult product industry — At a Shanghai expo, adult-product makers showed AI chatbots and connected devices while worrying about legality, consent, and privacy. It’s a notable example of AI moving into sensitive consumer areas where regulation is still catching up.
Sources & Tech News References
- OpenAI launches GPT-Rosalind, an AI model for life sciences and drug discovery
- China’s Clean-Tech Exports Surge as Iran War Triggers Global Energy Shock
- Google to Let Gemini and Nano Banana Generate Images Using Users’ Google Photos
- Stanford Report: China Nearly Closes U.S. AI Lead as Researcher Inflow Slows
- China’s Sex Toy Industry Tests AI-Powered Products Amid Legal and Privacy Concerns
- Leak Reveals Anthropic’s ‘Claude Mythos’ Hacking AI, Triggering Banking and Government Alarm
- Tinder and Zoom Add World’s Iris-Scan IDs to Fight Bots and Deepfakes
- European AI Chip Startups Chase Nine-Figure Funding as Inference Demand Grows
- Japan bets on ‘physical AI’ to close the gap with China in humanoid robots
- UK launches £500m Sovereign AI fund, but scale pales next to global leaders
Full Episode Transcript: Anthropic leak sparks cyber alarm & OpenAI’s GPT-Rosalind for biology
A leaked set of internal documents has reportedly revealed an AI system so capable it’s being discussed like a potential hacking weapon—and it’s already pulled in central banks and national security officials. Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is April 18th, 2026. Coming up: OpenAI aims at drug discovery with a new biology-focused model, Google pushes Gemini deeper into your photo library, and a fresh report suggests China is rapidly closing the AI gap with the US.
Anthropic leak sparks cyber alarm
Let’s start with the story that has cybersecurity teams paying very close attention. An accidental leak of internal Anthropic files has revealed details about something called “Claude Mythos.” Anthropic describes it as powerful enough to act like a high-end hacking tool—capable, in their telling, of finding and chaining together previously unknown software vulnerabilities. The headline here isn’t just “another scary AI rumor.” What’s notable is the response: reports say US financial leaders, including top figures at the Treasury and the Federal Reserve, have been convening banking executives to discuss the implications. In the UK, the Bank of England and security agencies are also preparing briefings, while the government’s AI Security Institute is getting early access to stress-test the system. Anthropic is limiting testing through a program called Project Glasswing, with a small set of major firms. Whether every claim holds up or not, the significance is clear: frontier AI is now being treated as a factor that can swing the balance between attackers and defenders—especially for banks, utilities, and other critical services that can’t afford to learn the hard way.
OpenAI’s GPT-Rosalind for biology
Staying with AI, but shifting to medicine: OpenAI has launched GPT-Rosalind, a new model series built specifically for life sciences research, drug discovery, and translational medicine. It’s named after Rosalind Franklin, whose work was central to understanding DNA’s structure. OpenAI’s pitch is that progress in biology isn’t limited only by the pace of lab breakthroughs—it’s also slowed by the sheer complexity of research workflows: combing through literature, connecting dots across datasets, planning experiments, and interpreting results. GPT-Rosalind is designed to reason over biological concepts like proteins, genes, disease pathways, and molecules, and to support multi-step scientific work rather than one-off answers. The company says it’s already working with major partners including Amgen, Moderna, the Allen Institute, and Thermo Fisher Scientific to test how it performs in real R&D settings. If it delivers, it could help shorten the time between an idea and a viable drug candidate—raising the competitive pressure in an industry that’s increasingly betting on AI as a force multiplier.
Google Photos meets Gemini images
Now to a more personal kind of AI: Google is rolling out a “personalized” image-generation feature that can connect Gemini to your private Google Photos library—if you opt in. The idea is simple: instead of uploading pictures into a chatbot, Gemini can reference what’s already in your photo archive and generate new images based on those memories. On the convenience side, it’s easy to see why people will try it—custom family scenes, stylized images, and quick creative edits without the friction of manual uploads. But the bigger story is access. This deepens how far a chatbot can reach into private data, and it raises the stakes around permission, controls, and what gets retained. Google says Gemini does not directly train its models on users’ Google Photos. Still, the company notes that some information—like prompts and responses—may be used, and Gemini can reference labeled people in Photos. The practical takeaway: if you enable this, you’re gaining a lot of ease and personalization in exchange for a new level of intimacy between your AI assistant and your personal archive. Expect privacy advocates to scrutinize the defaults, the transparency, and how easily users can reverse the decision.
China nears US in AI
Let’s zoom out to the global AI race. A Stanford HAI report says China has nearly closed the US lead in AI, pointing to a sharp narrowing in benchmark scores between top American and Chinese chatbots from 2023 to early 2026. The US is still described as producing more top-tier models overall, but the report highlights China’s strength on several scale indicators—things like AI publication citations, patents, and industrial robot installations. It also points to heavy investment momentum after a 2025 “DeepSeek moment,” plus a strong pipeline of AI-linked public listings, and—crucially—lots of power-generation capacity to support rapid data-center growth. On the US side, the warning is unusual but practical: electricity. Analysts argue that an aging, underinvested power grid could become a real bottleneck for new compute. And there’s a people angle too: the report says America’s “brain gain” in AI is weakening, with fewer scholars moving to the US than in the past. Even if the US remains a magnet, the trend line matters—because leadership in AI isn’t just chips and capital, it’s also sustained talent flow.
Europe bets on AI chips
That brings us to two different responses to the compute and sovereignty question—one from Europe’s private sector, and one from the UK government. First, Europe’s chip hopefuls. Several startups building alternatives to Nvidia’s GPUs are now chasing unusually large fundraising rounds, aiming to serve the booming demand for efficient inference—basically, the day-to-day running of AI systems at scale. One Dutch startup, Euclyd, is seeking a very large round to scale up and reach early customers, with backers tied to the ASML orbit. Investors are clearly reacting to geopolitics and supply-chain anxiety: export controls, dependence on a few manufacturing chokepoints, and the broader push for “sovereign compute.” But the hurdles are real too—long development cycles, limited foundry access, and a tougher funding environment than what US peers typically enjoy.
UK launches Sovereign AI fund
Second, in the UK: the government has launched a “Sovereign AI” fund worth up to £500 million to help turn local startups into national champions, pairing investment with compute access and faster routes for skilled workers. The debate here is scale. Supporters see a strategic signal; critics argue the budget is small next to the massive sums circulating in US-led AI—and that Britain’s track record of nurturing ‘champions’ that later end up foreign-owned complicates the narrative. Either way, it’s another sign that AI capability is now being treated like infrastructure, not just innovation.
Biometric ID tests at Tinder
Next, identity and the fight against deepfakes. Tinder and Zoom are partnering with the World network—run by Tools for Humanity—to offer optional iris scans that create a “proof of humanity” marker. The motivation is straightforward: AI has made impersonation cheaper and more believable, from romance scams to workplace fraud. Match Group says this could complement Tinder’s existing verification, and Zoom is positioning it as a defense against high-stakes deception—pointing to past cases where deepfake meetings allegedly triggered huge financial losses. World’s pitch is that iris patterns are unique and that the process doesn’t require sharing a name or address, with the ID stored on the user’s phone. Still, biometric verification is one of those ideas that always splits the room: stronger fraud defenses on one side, and the long-term privacy implications of normalizing eye scans on the other. The key detail is that it’s optional—for now. Watch closely whether “optional” stays optional as platforms hunt for ways to prove who’s real online.
AI enters adult product industry
Now for a story at the intersection of AI, consumer tech, and regulation—one that’s sensitive but increasingly hard to ignore. At a sex toy expo in Shanghai, adult-product companies showcased AI-driven devices including erotic chatbots, voice-enabled dolls, and connected products designed to sync with media or long-distance control. The business trend is that manufacturers are trying to move up the value chain by pairing hardware with software—essentially turning traditionally simple products into “smart” devices with personalized interactions. But exhibitors also flagged a major risk zone: privacy, consent, and compliance around machine-generated sexual content, especially anything involving face-swapping, celebrity likeness, or scraped adult material. In China, those concerns are amplified by strict rules—pornography is illegal in mainland China and many adult sites are blocked—so companies are walking a tightrope between innovation and regulatory exposure. This matters beyond the niche because it’s a preview of the bigger question: as AI seeps into intimate contexts, what safeguards—and what accountability—are actually in place?
Energy shock boosts China clean-tech
Finally, two stories that show how today’s geopolitical shocks are accelerating hardware adoption—both in energy and robotics. First, clean tech: China’s exports of lithium-ion batteries, electric vehicles, and solar cells rose sharply in March, according to customs data. The backdrop is the Iran war and the temporary shutdown of the Strait of Hormuz, which disrupted fossil-fuel supplies and pushed countries and consumers to look for alternatives as energy security concerns spiked. Analysts see this as an early signal that an energy shock can rapidly speed up electrification and renewables adoption—especially when gasoline prices surge and buyers want insulation from volatility. There’s also a policy wrinkle: some shipments may have been pulled forward ahead of changes to China’s export tax rebates for solar and batteries.
Japan’s humanoid robots, China lead
Second, humanoid robotics in Japan. At the Humanoid Robot Expo in Tokyo, Japan highlighted its ambitions with demos like a human-sized warehouse-focused robot named Galbot. But the twist is that many of the headline robots were built by Chinese companies—underscoring China’s growing strength in manufacturing. Japan’s strategy, according to the reporting, is to compete by building “physical AI”—the software, sensing, and data systems that help robots act reliably in messy real-world environments. With an aging population and labor shortages, Japan has a strong practical incentive to make robots useful in factories and eventually homes. But public comfort remains a barrier, so the framing is shifting toward robots as collaborators, not replacements.
That’s our run through the day’s tech landscape: AI that could reshape cyber risk, AI that could speed up drug discovery, and AI that’s pushing deeper into our photos, our identities, and even the most private corners of consumer tech. If you want one thread tying it all together, it’s this: capability is rising faster than governance—so institutions are scrambling to catch up, whether that’s banks preparing for AI-enabled exploits or platforms trying to prove who’s human. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller. We’ll be back tomorrow with the next wave of stories.