Tech News · April 29, 2026 · 10:35

ChatGPT ads and tracking cookies & AI agents push deeper workflows - Tech News (Apr 29, 2026)

ChatGPT ad tracking exposed, OpenAI heads to AWS, Google’s DoD AI talks, GitHub trust issues, CRISPR Phase 3 win, and a laser imaging breakthrough.

Today's Tech News Topics

  1. ChatGPT ads and tracking cookies

    — A researcher detailed how ChatGPT ads appear in responses and how attribution can follow users via merchant-side SDKs and first-party cookies, raising privacy and measurement questions.
  2. AI agents push deeper workflows

    — A new argument says AI agents change “software eating the world” by automating the work itself through agent loops, driving higher token demand as tasks become multi-step and self-checking.
  3. OpenAI expands beyond Microsoft cloud

    — OpenAI will offer its models on AWS via Bedrock, signaling a major cloud distribution shift and intensifying competition among Azure, AWS, and enterprise AI platforms.
  4. Google and military AI talks

    — Google is reportedly discussing deployments of advanced AI inside classified U.S. Department of Defense environments, reigniting internal employee concerns about broad military use.
  5. Open source shifts beyond GitHub

    — Warp open-sourced its terminal-based client with a split license, while prominent maintainers cite outages and churn as reasons to reduce reliance on GitHub’s centralized collaboration layer.
  6. GitHub Actions supply chain risks

    — A security analysis argues recent ecosystem compromises often used GitHub Actions “as designed,” highlighting risky workflow triggers, permissive tokens, and unpinned dependencies as repeat offenders.
  7. CRISPR and long-acting prevention

    — Intellia’s in vivo CRISPR therapy hit a pivotal Phase 3 goal in hereditary angioedema, while South Africa prepares a phased rollout of twice-yearly lenacapavir injections for HIV prevention.
  8. Breakthroughs in biomedical imaging

    — MIT researchers showed a high-power laser can self-organize into a stable pencil beam inside multimode fiber, enabling much faster cellular 3D imaging and potentially speeding drug testing on tissue models.
  9. Why new antibiotics don’t pay

    — A policy and economics review warns antibiotic resistance is rising while new antibiotic development remains financially unattractive, pushing governments toward subscription-style “pay for availability” models.
  10. Prediction markets versus AI forecasts

    — An analysis of Polymarket and Kalshi suggests most volume looks like entertainment betting, while decision-useful forecasting remains limited—and may be challenged by AI tools that package probabilities with context.


Full Episode Transcript: ChatGPT ads and tracking cookies & AI agents push deeper workflows

A security researcher says they’ve spotted how ads get injected into ChatGPT responses—and how clicks can be tied to activity on merchant sites for weeks. That’s the kind of detail that changes how you think about “AI assistants.” Welcome to The Automated Daily, tech news edition. The podcast created by generative AI. I’m TrendTeller, and today is April 29th, 2026. Let’s get into what happened, and why it matters.

AI agents push deeper workflows

Let’s start with the bigger arc in AI: the shift from chat to agents that actually run the work. One widely shared essay argues that the first wave of “software eating the world” mostly digitized the front door—apps, portals, dashboards—while people still did the messy back-office judgment calls. The new claim is that AI agents change that by turning white-collar workflows into repeatable loops: they take in new inputs, pull the right context, use tools through APIs, check their own results, and keep going without a human hovering over every step. The interesting part is the knock-on effect. These agent loops aren’t cheap “one-and-done” chats. They reprocess context repeatedly and consume more compute per task, especially in domains where verification is fast and digital, like software tests or database updates. The author’s bet is that the near-term goldmine is high-volume, coding-shaped operations work—support, healthcare administration, insurance processing—and that the winners will be apps that sit inside the loop and quietly collect proprietary operational data from real edge cases.
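The loop described above—take input, act through a tool, self-check, repeat—can be sketched in a few lines. This is a minimal illustration, not any vendor's agent framework; every name here (`run_agent_task`, the `tools` dict, `verify`) is hypothetical.

```python
# Minimal sketch of the agent loop described above: ingest a task, act
# through a "tool", verify the result, and repeat until the check passes
# or a step budget runs out. All names are illustrative.

def run_agent_task(task, tools, verify, max_steps=5):
    """Run one task through an iterate-until-verified loop."""
    context = {"task": task, "attempts": []}
    for step in range(max_steps):
        # Act; a real agent would let a model pick the tool and arguments.
        result = tools["execute"](task, context)
        context["attempts"].append(result)
        # Self-check: verification is what makes the loop converge, and
        # every extra pass reprocesses context (hence the token cost).
        if verify(result):
            return {"status": "done", "steps": step + 1, "result": result}
    return {"status": "gave_up", "steps": max_steps, "result": None}

# Toy usage: nudge a number until it passes an "is even" check.
tools = {"execute": lambda task, ctx: task + len(ctx["attempts"])}
outcome = run_agent_task(3, tools, verify=lambda r: r % 2 == 0)
```

The step budget is the cost-control knob: in domains where verification is fast and digital, the loop converges in a couple of iterations; where it isn't, compute per task balloons.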

Designing software for AI agents

That “agents need good guardrails” theme shows up in developer culture too. One developer-focused piece argues coding agents are blocked less by raw complexity than by ambiguity. If an API is loose—full of magic strings, conventions, and silent failure—an AI can get stuck in expensive trial-and-error. If the API is strict and validated, it gets faster feedback, clearer errors, and fewer dead ends. Put plainly: the same codebase can feel “AI-friendly” or “AI-hostile,” and the difference is whether the software loudly tells you what’s wrong. In an agent-driven world, that isn’t just elegance—it’s cost control. It reduces debugging cycles, and it reduces how long an agent has to keep looping and rereading context to find the fix. And looking further out, Addy Osmani is making the case that the next leap isn’t just smarter agents—it’s agents that can run for hours, days, or weeks. The point isn’t magical reasoning; it’s durable execution: saving state, resuming reliably after failures, and verifying progress in a way that doesn’t drift over time.
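The loose-versus-strict contrast is easy to see in code. The two functions below are hypothetical, but they capture the point: the first fails silently on a typo, the second returns an error that names the bad value and the allowed options—exactly the fast feedback an agent needs to stop looping.

```python
# Illustrative contrast between the "loose" and "strict" API styles the
# piece describes. Function and mode names are made up for the example.

def set_mode_loose(config, mode):
    # Magic strings, silent failure: a typo simply does nothing, so an
    # agent (or human) burns cycles guessing why behavior didn't change.
    if mode in ("fast", "safe"):
        config["mode"] = mode
    return config

VALID_MODES = frozenset({"fast", "safe"})

def set_mode_strict(config, mode):
    # Validated input, loud failure: the error names the bad value and
    # the allowed set, giving an agent an immediate, fixable signal.
    if mode not in VALID_MODES:
        raise ValueError(f"unknown mode {mode!r}; expected one of {sorted(VALID_MODES)}")
    config["mode"] = mode
    return config

set_mode_loose({}, "fsat")        # returns {} -- the failure is invisible
try:
    set_mode_strict({}, "fsat")
except ValueError as err:
    message = str(err)            # actionable: names the typo and the options
```

Same codebase, same bug; the only difference is whether the software tells you what's wrong before the agent starts another expensive trial-and-error pass.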

OpenAI expands beyond Microsoft cloud

Now for the platform moves. OpenAI says its models will be available on Amazon Web Services through Amazon Bedrock, including access to its Codex-style coding agent, with broader availability expected soon. The business implication is straightforward: OpenAI is meeting enterprise customers where they already run workloads, instead of forcing an Azure-first path. At the same time, OpenAI published an updated “Our Principles” document that subtly reframes its public posture. The newer version leans less on the classic “race to AGI” framing and more on rolling out increasingly capable systems in steps, with an emphasis on broad access and reducing concentration of power. Notably, it also drops an older promise that OpenAI might step aside if another group looked closer to building AGI more safely. In other words: it’s presenting itself less like a lab that might yield the field, and more like a long-term institution that expects to be central—and wants to be judged by evolving rules it publishes.

ChatGPT ads and tracking cookies

Here’s the privacy and power story I teased at the top. A security researcher reports observing how ad units are delivered inside ChatGPT as the model responds, and how conversions can be attributed on merchant sites. The claim is that ad content appears as structured objects in the live response stream, and that clicks can carry encrypted tokens that merchants’ pages then read through an OpenAI tracking SDK. That SDK can set a first-party cookie with a long-ish lifetime and send events back to OpenAI endpoints for measurement. Even if you’re used to ad tech, the noteworthy part is the end-to-end loop: a conversation, an in-app click, and downstream tracking across the shopping funnel. That raises familiar questions—what’s contextual versus personalized, what’s disclosed, what can be blocked—but now in the specific setting of an AI assistant people increasingly treat like a private workspace. And while OpenAI is shaping its business model, it’s also in court. Elon Musk testified in a dispute with Sam Altman over what OpenAI was founded to be and what it became, with Musk seeking massive damages and a court order aimed at restoring a nonprofit model. Whatever the legal outcome, the broader takeaway is that the governance structure of AI labs is no longer a niche debate—it’s central to how society will view legitimacy, accountability, and funding.
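The reported attribution loop can be sketched like this. To be clear, everything below is an assumption for illustration—the cookie name, the 30-day lifetime, and the function names are not OpenAI's actual SDK; only the shape (click token, first-party cookie, conversion events echoing the token) comes from the researcher's description.

```python
# Hypothetical reconstruction of the reported attribution loop: a click
# carries an opaque token, a merchant-side SDK stores it in a first-party
# cookie with a long lifetime, and later conversion events echo it back
# for measurement. All names and the TTL are assumptions.
import time

COOKIE_TTL = 30 * 24 * 3600  # assumed ~30-day first-party cookie lifetime

def on_ad_click(click_token, cookie_jar, now):
    # SDK on the merchant page persists the token as a first-party cookie.
    cookie_jar["_ai_attr"] = {"token": click_token, "expires": now + COOKIE_TTL}

def on_conversion(order_id, cookie_jar, now):
    # Any purchase inside the window is attributed back to the chat click.
    cookie = cookie_jar.get("_ai_attr")
    if cookie and cookie["expires"] > now:
        return {"event": "conversion", "order": order_id, "token": cookie["token"]}
    return {"event": "conversion", "order": order_id, "token": None}

jar = {}
t0 = time.time()
on_ad_click("tok_abc123", jar, t0)
attributed = on_conversion("order-1", jar, t0 + 7 * 24 * 3600)   # a week later
expired = on_conversion("order-2", jar, t0 + 60 * 24 * 3600)     # past the TTL
```

The privacy question lives in that window: for as long as the cookie survives, a purchase on a merchant site can be tied back to a chat that felt private.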

Google and military AI talks

On the government side, Google is reportedly negotiating with the U.S. Department of Defense to deploy its most advanced AI models inside classified environments, under language described as broad enough for “any lawful government purpose.” That’s a major contrast with Google’s post-Project-Maven caution back in 2018, and it’s already triggering internal pushback from employees who worry about open-ended military applications. This matters because once models are embedded deep in defense workflows, oversight gets harder. The systems are powerful, but still error-prone and often opaque, and the consequences of mistakes are not limited to a bad customer support outcome. This is also where the industry’s differences become visible: some companies publicly draw tighter lines around surveillance or weapons-related use, while others appear willing to negotiate wide terms in exchange for scale and strategic positioning.

Open source shifts beyond GitHub

Let’s switch to the developer ecosystem—because the ground is shifting under where code lives. Warp has released its client codebase on GitHub, pitching itself as an agent-focused development environment built around the terminal. It’s a split-license setup: some of the UI framework is permissive, while the bulk uses a copyleft license that shapes redistribution. Warp is also explicit that there’s a boundary between an open client and hosted, model-powered services behind it. At the same time, there’s a growing sense of GitHub fatigue. Armin Ronacher argues GitHub became more than a repo host—it turned into the social memory of software: issues, reviews, discussions, and context that make projects auditable. But he also warns that instability and product churn are eroding trust, and he calls for a boring, well-funded public archive for open source—something that preserves history without depending on one company’s incentives. That concern got sharper as Ghostty’s maintainer Mitchell Hashimoto announced plans to leave GitHub, citing frequent outages that block everyday work like reviewing pull requests and running CI. The headline isn’t “Git is broken.” It’s that the collaboration layer has become critical infrastructure—and if it’s flaky, maintainers will start hedging their dependencies.

GitHub Actions supply chain risks

And if you’re wondering why that collaboration layer is so sensitive, here’s one reason: supply chain security. A recent analysis argues that many high-profile compromises weren’t enabled by exotic zero-days. They happened because GitHub Actions workflows did dangerous things “as designed”: risky triggers that run on untrusted input, sloppy string handling that turns metadata into shell commands, caches that leak across trust boundaries, and default tokens that are too permissive. The practical guidance is not glamorous, but it’s the difference between a safe ecosystem and a fragile one: treat untrusted pull request data as hostile, pin dependencies instead of trusting mutable tags, and tighten workflow permissions. The bigger critique is that secure-by-default options still aren’t universal, which means the ecosystem’s security posture often depends on the time and paranoia level of unpaid maintainers.
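The "metadata becomes shell commands" failure mode is worth seeing concretely. The sketch below uses Python and a deliberately harmless payload; in a real workflow the unsafe pattern is interpolating `${{ ... }}` expressions (like a PR title) directly into a `run:` step, and the safe pattern is passing the data out-of-band as an argument or environment variable.

```python
# Demonstrates why string-building commands from untrusted metadata is
# dangerous, and why passing the data as a real argument is not. The
# "PR title" and its payload are illustrative and harmless.
import subprocess

pr_title = 'innocent"; echo INJECTED; echo "'      # attacker-controlled

# Unsafe: interpolation lets the title break out of its quotes, so part
# of the "data" is executed as a command by the shell.
unsafe_cmd = f'echo "PR: {pr_title}"'
unsafe_out = subprocess.run(unsafe_cmd, shell=True,
                            capture_output=True, text=True).stdout

# Safe: the title travels as one argv element, never parsed by a shell.
safe_out = subprocess.run(["echo", f"PR: {pr_title}"],
                          capture_output=True, text=True).stdout

injected = "INJECTED" in unsafe_out.splitlines()    # True: the payload ran
contained = "INJECTED" in safe_out.splitlines()     # False: it stayed as text
```

The same discipline—untrusted input never touches a command string—combined with pinned dependencies and minimal token permissions is most of the unglamorous guidance the analysis lands on.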

CRISPR and long-acting prevention

Now to health and biotech—where there was a genuine milestone. Intellia Therapeutics says its one-time, in vivo CRISPR treatment for hereditary angioedema hit its main goal in a pivotal Phase 3 trial, cutting attack rates sharply versus placebo, with a large share of patients attack-free at six months. If regulators agree and safety continues to hold up, this would be a landmark: a late-stage success for gene editing delivered inside the body, not edited outside and reinfused. In public health, South Africa’s Health Department is preparing a phased rollout of lenacapavir for HIV prevention, a long-acting injection given once every six months. The promise is adherence: fewer missed doses than daily pills. The challenge is supply, follow-up, and making sure the program complements other prevention tools. And in research, scientists at the Barcelona Supercomputing Center created a large atlas mapping how multiple reproductive organs age across the menopausal transition, using AI on tissue images and gene-expression data. The headline there is nuance: menopause doesn’t shift every organ the same way, and different tissue layers can age at different speeds—insights that could support earlier, more personalized monitoring without relying on biopsies.

Breakthroughs in biomedical imaging, and why new antibiotics don’t pay

Two more science notes worth your attention. First, MIT researchers report that a high-power laser sent through a multimode optical fiber can self-organize into a tight, stable “pencil beam,” which goes against the assumption that more power inevitably means more chaos in that setup. They used the effect to do much faster 3D cellular imaging of a human blood-brain barrier model, potentially speeding up how quickly researchers can test whether drug candidates reach the right targets. Second, a team at Texas A&M demonstrated laser-driven control of micron-scale devices—tiny “metajets”—that can be lifted and steered in three dimensions using engineered surfaces. It’s early-stage and small-scale, but it’s a concrete step in the long-running idea that light can do more than illuminate; it can also push and position, without onboard fuel. Finally, on the medicine-economics front, another analysis argues the world is drifting toward an antibiotic crunch: resistance is rising, but the business case for new antibiotics remains weak because the best public health outcome is using them sparingly. The proposed fix is to pay for readiness—subscription-like models that reward availability without requiring high sales volumes.

That’s the tech news for April 29th, 2026. If there’s one thread connecting today’s stories, it’s feedback loops: AI agents that iterate until a test passes, ad systems that close the loop from chat to checkout, and software ecosystems where reliability and security determine whether people stay centralized—or start migrating away. Thanks for listening to The Automated Daily, tech news edition. I’m TrendTeller. If you want, share the episode with someone who’s building agents, securing CI, or just trying to understand where AI business models are heading. See you tomorrow.