Sunday Links: Snitching LLMs, OpenAI Devices, and MCP Adoption

Big strategy moves in AI, with OpenAI moving into devices and adopting MCP, and Google launching AI Mode in search

I've always been in two minds about naming these posts after the day they're posted (Friday, Saturday, Sunday). It's probably confusing (why not "Weekly Links" or "Weekend Links"?). On the other hand, it keeps me honest: strive for Friday, hope for Saturday, Sunday without fail! I'll have to go back and count how many of each there are to score myself. Here are this week's links.

  • Anthropic faces backlash to Claude 4 Opus behavior that contacts authorities, press if it thinks you’re doing something ‘egregiously immoral’. If you give an LLM-based system command line access and the means to reach the external Internet, will it turn you in if you're committing immoral acts? With the right training, it seems so, yes. This doesn't seem at all surprising to me: LLMs are capable of a huge range of responses to any input, since they have trained on the plots of almost every human story you can imagine. "Doing the right thing" and reporting to the authorities sits in that behaviour space somewhere. Given that Anthropic has been leading the charge in trying to instill safe behaviour in its models, it's not surprising that it happened with Claude first. I suspect it's probably going to be true of almost all LLMs at some level; Anthropic's training just pushed the model closer to that behaviour. Once you've seen this, though, you can't unsee it, and Anthropic will probably suffer to an extent because people will fear what the model might do. It's actually unclear how we'll ever be completely sure that an LLM-based system would not take actions like this. A system that is both a polyglot movie buff and an expert Linux sysadmin has a lot of complex possible behaviours, and a lot of power.
  • Google unveils ‘AI Mode’ in the next phase of its journey to change search. In a widely covered announcement at its Google I/O event, Google's leadership announced that in the United States and a number of other countries, Google Search users would now have access to a separate tab on the search page that focuses on a chat-style interactive search experience delivered by Gemini. Over time, the most polished elements from AI Mode are planned to graduate into the main search experience. Given the pressure from ChatGPT's rapid growth, Google needs to do this, but it's also clear that it will funnel more and more traffic away from websites that might have answered that same question. Less traffic to the open web. Cloudflare CEO Matt Prince's take on AI's effect on Internet traffic is pretty stark (CNBC).
  • OpenAI’s $6.5 Billion Purchase Of Jony Ive’s Io. In any other week, Google's AI Mode and the Opus phone-home story might have been the top story. This week, though, there's at least one more contender. OpenAI announced that a joint venture between the company and ex-Apple designer Jony Ive's team would be absorbed into OpenAI to begin to deliver a new generation of personal computing devices. Trying to build hardware products as a software company is fraught with difficulties, and there is a high chance of failure, but I think this is a momentous announcement. ChatGPT's deep strength is that it has amassed a huge user base. This creates a unique opportunity to break out as a new go-to interface for computing and information. If OpenAI can deliver compelling hardware devices, it has a shot at breaking the Apple/Google duopoly as the first port of call for users. The device may not even need to be that complex. There are rumours that it will be screenless (which would be my guess as well: screenless, or a very simple screen plus audio/voice control and maybe gesture control). It seems unlikely in the extreme that it would rely on the Apple or Google app stores; more likely it would rely on OpenAI's own curation of content and service-provider relationships. On the one hand, it's exciting to see the current computing access paradigm challenged. On the other hand, it is scary to think that OpenAI's model could turn out to be more closed and restrictive than what we have today.
  • On a further sad note... Pocket is shutting down. This seems like another blow to the open web. It was a great way to save things to read across all platforms and track them in one place. Mozilla's shutdown notice says that it's in large part due to changes in browsing behaviour. I can't help thinking it's also because Mozilla itself has to rationalize its activities so it can keep providing the browser. Pocket has been an integral part of my blog-writing workflow since inception. I would have thought that having a clickstream of human-curated content might be a useful asset for Mozilla, but I guess not enough to keep it around.
  • New tools and features in the Responses API. To end on a more positive note: OpenAI announced support for Model Context Protocol (MCP) endpoints within its models. MCP, developed by Anthropic, allows a model to call out to external tools in a structured way. It would have been frustrating to see OpenAI try to launch its own protocol instead. Luckily, adoption has been swift enough that there is at least some temporary consensus on how to support external tool use (a rough sketch of what this looks like follows below). Just don't give your LLM access to the FBI website.
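
For the curious, here's roughly what pointing the Responses API at a remote MCP server looks like. This is a minimal sketch based on the announcement, not a definitive implementation: the model name, server URL, and server label below are placeholders, and the exact tool-spec fields may vary with your SDK version.

```python
from openai import OpenAI  # assumes a recent openai SDK with Responses API support

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model a question and let it call out to a remote MCP server for tools.
# The server URL and label are placeholders -- point them at a real MCP endpoint.
response = client.responses.create(
    model="gpt-4.1",  # any Responses-API-capable model
    tools=[
        {
            "type": "mcp",                            # remote MCP server tool type
            "server_label": "example_tools",          # placeholder label
            "server_url": "https://example.com/mcp",  # placeholder MCP endpoint
            "require_approval": "never",              # or gate tool calls behind approval
        }
    ],
    input="Summarise the latest items in my reading list.",
)

print(response.output_text)
```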

Wishing you a happy Sunday!