Weekly Links: ChatGPT logs, the AI Economy, and NVIDIA Self-Driving

2026 is already moving fast, with self-driving releases from NVIDIA, court cases over ChatGPT logs, and Anthropic pushing hard on enterprise adoption.

I hope 2026 is treating you well so far. AI news accelerated again this week: A16Z raises a new $15B fund and lays out its philosophy for investing it, Meta AI's internal strife gets an airing, and bioweapon demos land in Washington. ChatGPT can now also access health-related information. In non-AI news, we have a wild gravity machine in China, and Italy is fining Cloudflare.

  • News orgs win fight to access 20M ChatGPT logs. Now they want more. OpenAI is locked in an intense legal tussle over whether it must turn over user chat logs to news organizations seeking evidence of copyright infringement. The current ruling stems from the case led by the New York Times against Microsoft and OpenAI. There is a clear tension here: user chats are private, and opening them to scrutiny could be a privacy violation. The plaintiffs also accuse OpenAI of deleting chat logs containing potentially infringing material. It seems likely that AI providers will increasingly add filters to their generation pipelines to prevent actual copyright infringement. That doesn't rule out sanctions in this case, but in the long term the news organizations are probably fighting a losing battle.
  • AI and the Next Economy. A great big-picture overview of the economic challenges of AI by Tim O'Reilly. His key point is "You can’t build a prosperous society that leaves most people on the sidelines." I wholeheartedly agree. For AI to genuinely benefit humanity, it needs to be accessible to all as producers, not just consumers. I take a lot of heart from the ongoing improvements in open-source models, since they make it very hard for just a few companies to keep a chokehold on AI's benefits. O'Reilly argues that we have a long way to go in making AI genuinely economically productive for people. Another worrying trend is that human-AI conversations and content disappear into silos and are no longer even partly public (as they were in online communities such as Reddit and Stack Overflow). I also agree with O'Reilly's conclusion: openness and diffusion are critical; otherwise centralization will create even larger divides than today's.
  • Claude Code 2.1.0 arrives with smoother workflows and smarter agents. Anthropic is leaning harder into enterprise and agentic AI, and it is picking up brand-name enterprise users like Allianz. This is interesting because agentic systems are still much harder to control than user-interactive AI: outside of developer tasks, where a well-defined execution environment can validate outputs, error rates can be high. Clearly, though, Anthropic aims to lead in sanding the sharp edges off these systems.
  • A red pixel in the snow: How AI solved the mystery of a missing mountaineer. With drone footage and AI object detection, it is becoming feasible to search large areas for missing persons. Sadly, it came too late for the climber involved this time, but the capability looks increasingly realistic as an aid for stretched search and rescue teams (a toy illustration of the idea follows after this list).
  • NVIDIA debuts new robotics and self-driving technology at CES. NVIDIA used this year's CES keynote stage to debut a new, highly capable vision-language-action (VLA) model for driving applications. The approach follows what Tesla and other self-driving leaders have been working on: it folds the entire perception-to-action loop into a single reasoning model rather than splitting the work across separate subtask models (a rough sketch of the difference follows after this list). The new models are open-source and ship with a simulated training environment, which may help other driving players catch up with the leaders to some extent. From a technical perspective, making such systems reliable is a huge feat. For now the system only supports up to Level 2 autonomy, but there is no doubt more to come.
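
To make the search-and-rescue idea concrete, here is a deliberately naive sketch, purely illustrative and not the pipeline used in the actual search: it flags pixels whose color deviates strongly from the snowy background of a frame, which is the "red pixel in the snow" intuition. A real system would run a trained object detector over many frames; the function name and threshold below are made up.

```python
# Hypothetical sketch, not the system used in the actual search: flag small
# regions in a drone frame whose color stands out against a snow background.
import numpy as np

def find_color_anomalies(frame: np.ndarray, threshold: float = 60.0) -> list[tuple[int, int]]:
    """Return (row, col) coordinates of pixels that deviate strongly from the
    frame's median color (roughly: 'anything that is not snow')."""
    median_color = np.median(frame.reshape(-1, 3), axis=0)        # typical background color
    distance = np.linalg.norm(frame.astype(float) - median_color, axis=-1)
    rows, cols = np.where(distance > threshold)                   # pixels far from background
    return list(zip(rows.tolist(), cols.tolist()))

if __name__ == "__main__":
    # Mostly-white synthetic "snow" frame with a tiny red patch planted in it.
    frame = np.full((480, 640, 3), 245, dtype=np.uint8)
    frame[200:203, 300:303] = (200, 30, 30)
    hits = find_color_anomalies(frame)
    print(f"{len(hits)} anomalous pixels, first at {hits[0] if hits else None}")
```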

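To illustrate the architectural point in the NVIDIA item, here is a minimal sketch contrasting a classic modular self-driving stack with an end-to-end VLA-style model that maps camera frames plus a language instruction directly to a driving action. All class and method names are hypothetical stand-ins, not NVIDIA's (or anyone's) actual API.

```python
# Illustrative contrast between a modular driving stack and an end-to-end
# VLA-style model. Everything here is a placeholder, not a real implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Action:
    steering: float      # radians, positive = left
    acceleration: float  # m/s^2

class ModularPipeline:
    """Classic stack: separate perception, prediction, and planning stages."""
    def step(self, frame: np.ndarray) -> Action:
        objects = self.detect(frame)      # perception subtask
        futures = self.predict(objects)   # trajectory-prediction subtask
        return self.plan(futures)         # planning subtask
    def detect(self, frame): return []
    def predict(self, objects): return []
    def plan(self, futures): return Action(0.0, 0.0)

class VLADriver:
    """End-to-end: one model reasons over pixels plus language and emits the action."""
    def step(self, frame: np.ndarray, instruction: str) -> Action:
        # In a real system this would be a single large model forward pass;
        # here we just return a placeholder action.
        return Action(steering=0.0, acceleration=1.0)

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # dummy camera frame
    print(ModularPipeline().step(frame))
    print(VLADriver().step(frame, "merge left when safe"))
```
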
Wishing you all a great weekend!