Saturday Links: Agents meet Banks, Watermarking, and Google Pushes Harder

The EU begins to set rules for AI watermarking, Google and Mistral release new models + UK Banks begin adopting Agentic AI.

This week, Databricks raised $4B, one of the original robotics companies sadly declared bankruptcy, and cars took flight in China.

On to this week's longer stories:

  • Agentic AI race by British banks raises new risks for regulator. Several of the leading banks in the UK have spoken openly about deploying agentic AI applications in customer-facing use cases. This is consistent with the general drive toward adoption, but it is non-trivial in a sector as heavily regulated as banking. The UK's FCA explicitly warns that agentic (no-human-in-the-loop) applications require particular scrutiny. At Safe Intelligence, we're actively working in this area to provide better validation solutions. AI can be magical, but deployment requires careful thought and monitoring.
  • Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content. This week, the EU began to outline the specifics of how AI content might be watermarked. Long-time readers will know that I think trying to watermark content to "clearly label deepfakes" is a lost cause (you can read the full argument here). If we do anything, it should be the opposite: certifying the provenance of content we wish to be trusted. The EU's first draft document itself is here. On the one hand, I admire that people are trying to grapple with the very real problem of fake (and especially misleading) content; on the other, reading through the draft makes it obvious what a minefield this is. Even pre-AI, enormous amounts of our content were digitally manipulated, and the most serious problems come from deepfakes, whose creators have every interest in the world in removing watermarks. The draft also leads to statements like this: "Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work." This is either a giant exemption or a technical impossibility: in a creative work, how can an AI marker be both obvious to the viewer and not detract from the enjoyment of the work?
  • The Year in Slop. The New Yorker reviews the year in AI-generated memes (a.k.a. slop). When the New Yorker does this, is it a protest, or has AI slop now truly "arrived"?
  • Mistral releases the Mistral 3 family of models. Credit to Mistral for staying the course on open source with Apache 2.0-licensed models, even as other labs have slowed their open-source releases. The new family includes small, efficient models (at 3B, 8B, and 14B sizes) for edge applications, and the company also released an updated CLI coding agent. It is good to see continued progress in open source: it genuinely matters that innovation can spread to anyone who can run the code.
  • Gemini 3 Flash launches and becomes the default in the Gemini app + AI Mode in Search. Google keeps pushing the boundaries of performance with its low-price-point models, and keeps adding them to its free products. At a time when OpenAI seems worried about falling behind, Google's bet appears to be to keep the pressure on by making its free experience better and better. My guess, though, is that most ChatGPT users are already attached enough to their AI usage in ChatGPT form that even somewhat better answers will not persuade them to switch.

Wishing you a great weekend and holiday period!