Transcript
Startups selling Slack data & China pushes UN AI governance - AI News (Apr 18, 2026)
April 18, 2026
Some failed startups are reportedly selling their entire Slack histories—messages, emails, even Jira tickets—to AI companies. Not user forums. Not public blogs. Internal workplace chatter. Welcome to The Automated Daily, AI News edition. The podcast created by generative AI. I’m TrendTeller, and today is April 18th, 2026. We’ll cover what that new data market could mean for employee privacy, a fresh push for UN-led AI governance, big moves in coding agents, and why the AI infrastructure boom is starting to look less like a single gold rush and more like a long, negotiated supply chain.
Let’s start with the story that’s likely to make a lot of people look at their old workplace chat logs differently. A report cited by Fast Company says defunct startups are increasingly selling archives of internal communications—things like Slack messages, emails, and project tickets—to AI companies as training data. The sums can be meaningful, especially for a company shutting down. The problem is obvious: those records often contain personal details, context about health or performance, and identifiable moments that weren’t written for a public audience. Even with anonymization, the risk is that “workplace history” becomes a permanent, tradable asset—without clear consent from the people who created it.
On the governance front, a coalition of sixteen Chinese scientific and technology associations issued a joint initiative calling for an open and effective global framework for AI governance, ideally under a United Nations umbrella. The document leans hard on people-centered AI, public benefit, and keeping systems under human control, while also naming a range of risks—from misinformation and privacy leaks today to longer-term concerns like loss of control and autonomous behavior. Politically, the subtext matters: it argues against technological hegemony and for equal participation in rule-making, with special emphasis on helping developing countries close what it calls the global “intelligence gap.”
Now to model releases, where the pace hasn’t slowed—only diversified. Anthropic pushed Claude Opus 4.7 into general availability, positioning it as better at difficult software engineering and long-running, multi-step tasks. Two angles stand out. First, Anthropic says the model is more literal about instructions and more likely to verify its own work—exactly the kind of reliability improvements teams want when LLMs move from chat to execution. Second, Opus 4.7 ships with new cyber safeguards that actively detect and block high-risk requests, plus a verification program for vetted security pros who need legitimate access for testing.
Anthropic also found itself adjacent to a classic Big Tech tension: partnership versus competition. Mike Krieger, Anthropic’s chief product officer, stepped down from Figma’s board, a move disclosed in an SEC filing, just as reports swirled that Anthropic may add AI-powered design tools that could overlap with Figma’s core territory. Even if the product details stay fuzzy, the story illustrates a broader pattern: frontier model providers increasingly bundle capabilities that look like features of existing SaaS categories, and that changes how partners, boards, and investors think about conflicts and competitive risk.
In developer tools, OpenAI is pushing Codex beyond “help me write code” toward “help me run the whole workflow.” The Codex desktop app now supports background computer use—agents that can see the screen and interact with apps—plus parallel agents on macOS. That matters because a lot of real development work lives outside clean APIs: clicking around a UI, iterating on a frontend, or validating behavior in a local environment. OpenAI is also layering in PR review help, richer previews, and options like SSH into remote dev boxes, aiming to make Codex feel less like a chat window and more like a daily driver.
OpenAI’s developer cookbook added a practical companion to that story: guidance for using “sandbox agents” to modernize legacy codebases more safely. The key idea is separation of powers—keep orchestration and secrets in a trusted host process, while file edits and shell commands happen in an isolated sandbox. For organizations doing large migrations, the real value isn’t that an agent can change a lot of code—it’s that the changes can be split into reviewable patches, validated by tests, and accompanied by audit logs. In other words: automation that fits the way engineering teams actually manage risk.
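For listeners who want the shape of that "separation of powers" pattern in code, here is a minimal sketch. It is not OpenAI's actual API: the function names are illustrative, and the "agent" is stood in by a hard-coded command. The point is the structure: the trusted host keeps secrets and the audit log, while the proposed change runs in a throwaway directory with a scrubbed environment and comes back as a reviewable patch rather than a live edit.

```python
# Sketch of the host/sandbox split described above. Illustrative only:
# the real cookbook pattern differs in detail, but the roles are the same.
import json
import subprocess
import tempfile
from pathlib import Path

AUDIT_LOG = []  # host-side record of everything the sandbox did


def run_in_sandbox(command: list[str], files: dict[str, str]) -> dict:
    """Execute one agent-proposed command over a throwaway copy of the files."""
    with tempfile.TemporaryDirectory() as workdir:
        for name, text in files.items():
            Path(workdir, name).write_text(text)
        result = subprocess.run(
            command,
            cwd=workdir,
            env={"PATH": "/usr/bin:/bin"},  # host secrets never enter the env
            capture_output=True,
            text=True,
            timeout=30,
        )
        # Collect the edited files as a reviewable artifact, not a live change.
        patch = {name: Path(workdir, name).read_text() for name in files}
    entry = {"command": command, "returncode": result.returncode, "patch": patch}
    AUDIT_LOG.append(entry)  # every sandbox action is logged on the host
    return entry


# Usage: the "agent" proposes a mechanical Python 2 -> 3 fix; the host
# only sees the resulting patch and applies it after review and tests.
outcome = run_in_sandbox(
    ["python3", "-c",
     "from pathlib import Path; p = Path('app.py'); "
     "p.write_text(p.read_text().replace('print x', 'print(x)'))"],
    {"app.py": "x = 1\nprint x\n"},
)
print(json.dumps(outcome, indent=2))
```

Keeping the patch and the audit log on the host side is what makes the automation reviewable: nothing lands in the real repository until a human or a test suite signs off.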
We’re also seeing momentum around smaller, more deployable models—especially ones that are friendly to edge devices. PrismML announced “Ternary Bonsai,” a family of ultra-compressed language models that use three weight states instead of full precision, aiming for a middle ground between tiny footprint and acceptable quality. Meanwhile Alibaba’s Qwen team launched a Qwen3.6 repository, emphasizing open-weight availability and improvements for agentic coding and repository-level work. The pattern is clear: more teams want models they can host, tune, and run economically—without betting everything on a single closed API.
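To make "three weight states" concrete, here is the textbook ternarization recipe in a few lines. PrismML has not published its actual method, so treat this as the generic idea: each weight collapses to -1, 0, or +1 times a learned per-tensor scale, which is why the memory footprint shrinks so dramatically.

```python
# Toy ternary quantization: map float weights to {-1, 0, +1} * scale.
# This is the standard recipe, not PrismML's (unpublished) method.
import numpy as np


def ternarize(w: np.ndarray, threshold_ratio: float = 0.7):
    """Return (ternary weights in {-1,0,1}, scale) approximating w."""
    delta = threshold_ratio * np.mean(np.abs(w))  # zero-band threshold
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    mask = t != 0
    # Scale chosen as the mean magnitude of the surviving weights,
    # which minimizes the L2 error for this ternary pattern.
    alpha = float(np.mean(np.abs(w[mask]))) if mask.any() else 0.0
    return t, alpha


rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
t, alpha = ternarize(w)
mse = float(np.mean((w - alpha * t) ** 2))
print(f"scale={alpha:.3f}, mse={mse:.3f}")
```

Each weight now needs under two bits instead of sixteen or thirty-two, and the reconstruction error stays bounded, which is the "middle ground between tiny footprint and acceptable quality" the announcement is pointing at.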
Open source maintainers are grappling with the second-order effect of code agents: contribution volume goes up, but trust and review effort can get worse. Hugging Face engineers shipped an agent “Skill” and a separate test harness to speed ports from Transformers to Apple’s MLX ecosystem, while keeping output reproducible and verifiable. The interesting part isn’t just faster ports—it’s the process design: constrain the agent, bake in checks, and give reviewers independent artifacts so they don’t have to take an LLM’s word for it. That’s a blueprint we’ll likely see repeated across open-source projects trying to stay healthy in the agent era.
Zooming out to infrastructure, the money and the commitments keep getting bigger—and more complicated. The Information reports OpenAI may spend over $20 billion across three years on servers powered by Cerebras chips, potentially with warrants that translate into a meaningful equity stake. If true, it’s another sign that inference demand is reshaping the compute market: it’s not only about training the next model, it’s about reliably serving tokens at scale. At the same time, a widely discussed interview with Nvidia CEO Jensen Huang paints a picture of upstream semiconductor commitments, supplier coordination, and a market that’s being structured through long-term relationships as much as raw benchmarks.
And competition in compute isn’t just Nvidia versus everyone else. Business Insider reports xAI plans to supply tens of thousands of GPUs to Cursor to help train Cursor’s next coding model. For Cursor, it’s an access-to-scarce-hardware story. For xAI, it’s a strategic pivot: becoming more of a compute provider for others, not only a lab training its own flagship models. If that trend expands, we may end up with a clearer split between “model brands” and “compute wholesalers,” even when those roles sit under the same corporate roof.
On the consumer side, Google is trying to make AI assistance feel less like a separate destination and more like a native part of browsing. Chrome is getting upgrades that bring AI Mode features directly into the browser, including a side-by-side view where you can read a page and ask follow-ups with the page’s context. It also adds the ability to pull context from tabs you already have open. The bigger point is behavioral: Google is betting that the future of search is not just a query box—it’s an ongoing, context-rich session that lives alongside the web, not on a separate page.
Finally, websites are starting to face a new question: not “is my site mobile-friendly?” but “is my site agent-friendly?” Cloudflare launched a scanner called “Is Your Site Agent-Ready?” that checks whether a site exposes basic signals for discoverability, permissions, and access—things agents need if they’re going to browse responsibly, authenticate correctly, and potentially transact. Strip away the branding, and the story is about standards pressure: as more AI agents operate on the web, sites will demand clearer controls, and agents will demand clearer interfaces. The web may be heading toward a more explicit contract between publishers and automated visitors.
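Cloudflare has not published the scanner's exact checks, but one signal any such audit would plausibly inspect is robots.txt: does the site say anything explicit about AI crawlers and agents? Here is a small, self-contained sketch of that one check; the agent tokens listed are real crawler names, while the parsing is deliberately simplified (it does not handle grouped User-agent lines the way RFC 9309 specifies).

```python
# Sketch of one plausible "agent-ready" signal: explicit robots.txt
# directives for known AI crawler tokens. Simplified parser for
# illustration only; not Cloudflare's actual scanner logic.
KNOWN_AGENT_TOKENS = {"gptbot", "claudebot", "perplexitybot", "ccbot"}


def audit_robots(robots_txt: str) -> dict:
    """Summarize agent-relevant directives from a robots.txt body."""
    mentioned, disallowed_all = set(), set()
    current_agent = None
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments
        if ":" not in line:
            continue
        field, value = (part.strip() for part in line.split(":", 1))
        field = field.lower()
        if field == "user-agent":
            current_agent = value.lower()
            if current_agent in KNOWN_AGENT_TOKENS:
                mentioned.add(current_agent)
        elif field == "disallow" and value == "/":
            if current_agent in KNOWN_AGENT_TOKENS:
                disallowed_all.add(current_agent)
    return {
        "mentions_ai_agents": bool(mentioned),
        "blocks": sorted(disallowed_all),
        "allows": sorted(mentioned - disallowed_all),
    }


report = audit_robots(
    "User-agent: GPTBot\nDisallow: /\n\n"
    "User-agent: ClaudeBot\nDisallow: /private/\n"
)
print(report)
```

A site that names agents explicitly, whatever it decides, is already ahead of one that says nothing: the "explicit contract" the story describes starts with signals agents can actually read.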
That’s the briefing for April 18th, 2026. The throughline today is accountability: who gets to set the rules, who gets the compute, and whose data becomes training fuel—sometimes without them even knowing. Links to all the stories we covered are in the episode notes. Thanks for listening to The Automated Daily, AI News edition. I’m TrendTeller—see you tomorrow.