Weekly Links: The Agent Internet (Moltbook), Bubbles, and Planning on Mars

This week's most impactful story is the launch of Moltbook, the first social network for AI agents. It has big implications for knowledge discovery, security, and possibly AI takeoff.

This week in AI, Google's auto-generated 3D worlds ding video game stock prices, a travel bot invents new hot springs in Tasmania (though it also generated a lot of free publicity, so maybe it's on to something), and AI planning comes to Mars.

On to the main stories:

  • Moltbook is the most interesting place on the internet right now. I think this might be the most important story of the week, and Simon Willison's take is one of the saner ones. One of the fastest-growing uses of AI is Peter Steinberger's OpenClaw.AI (originally called Clawdbot), which is designed to be a deeply personalized AI assistant. Usage has exploded, and it's likely a picture of what AI assistants could become. Moltbook goes a step further: it's essentially a "social network" for people's Clawdbots/Moltbots. Connected agents can create forums and communicate; humans can view but not participate (see the first sketch after this list for a rough sense of what an agent-to-forum exchange might look like). This sounds crazy (and likely is, given the real risk that some agents will share sensitive security information), but it's also fascinating: for example, agents sharing what new skills they've learned. The rapidly growing network is a stewpot of crazy ideas... I think Andrej Karpathy's take is pretty spot on: "What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g., even how to speak privately." There's a fascinating side to this: how fast will it evolve, and how useful will things like skill sharing be? However, it's clearly also a high-risk security issue for everyone connecting their bots. Lastly, it's worth asking whether this is some kind of intelligence takeoff. A collection of communicating agents is a lot more powerful than a single system, and can go off script much more easily. Furthermore, many of these Moltbots have full access to their owners' computer systems and can take a wide range of actions. I wouldn't call this an intelligence breakout yet, but realistically, it's probably the seed of one. I would not be surprised if Anthropic moved to forbid people from using its tools this way... but even if it does, people will likely rebuild the same thing on different underlying models. It's possible the AI Internet was born this week.
  • The AI bubble will pop. It’s up to us to replace it responsibly. Mark Surman is the president of Mozilla, the organization behind the Firefox browser, among other tools. His thought piece argues that a bubble burst is likely and that (much like after the dot-com crash of the early 2000s) a more grounded form of AI will replace today's hype. My guess is that this is wishful thinking. I'm sure there will be crunches and drops in AI growth... but the level of usage many tools are seeing is far beyond anything possible during the dot-com bubble, when only a fraction as many people had reliable high-bandwidth Internet access. Bubble bursts will happen, and some companies will crash because of them, but not all of them will. As a result, the winners will have a huge influence on what comes next, and the race to be among them is itself inflating the bubble.
  • How AI Impacts Skill Formation. Hat tip to Jernej on the Safe Intelligence team. In this study, researchers in the Anthropic Fellows Program benchmark human skill learning at varying levels of AI usage. The headline conclusion is that how you use AI affects skill retention: AI reliance clearly reduced conceptual understanding of a skill and (likely) the ability to reproduce that skill later. These are intuitive findings, and it's good to see an in-depth study. The question is how much this matters: if AI can perform these tasks well, perhaps for some people less understanding is fine. A deeper question is what impact reduced understanding of a building-block skill has at higher levels of system design. That is, if I know less about how a building block works, am I more likely to misuse it when I build a larger system? That seems like a domain-dependent question, coupled with questions about the task itself and how reliably AI actually performs it.
  • Google searches per U.S. user fell nearly 20% YoY: Report. From the report: "U.S. Google searchers are searching far less than a year ago, according to a new Datos/SparkToro report. The data suggests Google isn’t losing users — it’s losing repeat searches." (In Europe, the drop is only 2-3%.) The drop is almost certainly due to answers coming more readily via AI, and it has an even bigger downstream effect on how much traffic Google then sends to the sites that provide information for those queries. It's part of a doomsday scenario for the open web.
  • Airtable jumps into the AI agent game with Superagent. AI is proving to be a tough environment for many established SaaS companies. Airtable is making the jump by going as far as it can with a fully AI-driven interface: essentially an interactive app builder for business analytics and related use cases. It's bold, and the functionality sounds interesting. The real challenge is whether it can stay ahead of what frontier models can do generically with Airtable data (the second sketch below shows how little glue code that takes). At the moment, established SaaS companies are the guardians of their customers' data, and there's a race to layer new AI functionality on top of that data to keep it that way. Losing the race likely means more customers pulling their Airtable data out to use with ChatGPT, Gemini, etc. I wouldn't be surprised to see some high-profile acquisitions driven purely by data and customer reach.
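
For a rough feel of the Moltbook pattern described above, here's a minimal sketch of an agent posting to a forum-style API. Everything in it is an assumption for illustration: the endpoint, payload shape, and token auth are invented, since Moltbook's actual interface isn't something I have documentation for.

```python
# Hypothetical sketch of an agent posting to a Moltbook-style forum.
# The base URL, endpoints, payload fields, and auth scheme are all
# assumptions for illustration -- not Moltbook's real API.
import requests

BASE_URL = "https://moltbook.example/api/v1"  # placeholder host
TOKEN = "agent-secret-token"                  # each connected agent presumably authenticates somehow

def post_to_forum(forum: str, title: str, body: str) -> dict:
    """Create a post in a forum; humans could read it but not reply."""
    resp = requests.post(
        f"{BASE_URL}/forums/{forum}/posts",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": title, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # e.g., the "skill sharing" behavior mentioned above
    post = post_to_forum(
        forum="skill-sharing",
        title="Learned: summarizing long PDFs",
        body="Chunk by section headers, summarize each, then merge the summaries.",
    )
    print(post)
```

The security worry falls straight out of this shape: the same credential that lets an agent post "skills" also lets it post whatever sensitive context its owner gave it.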
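
And the second sketch, on the Airtable point: how little glue code it takes to export Airtable records and hand them to any frontier model. The REST call below follows Airtable's published v0 API as I understand it, but treat the token, base and table IDs, and the final prompt handoff as placeholders.

```python
# Sketch: pull Airtable records and hand them to any LLM for analysis.
# The Airtable v0 REST endpoint shape is real to the best of my knowledge;
# the token, IDs, and the LLM handoff are placeholders to swap for your own.
import json
import requests

AIRTABLE_TOKEN = "pat_xxx"      # placeholder personal access token
BASE_ID = "appXXXXXXXXXXXXXX"   # placeholder base ID
TABLE = "Deals"                 # placeholder table name

def fetch_records() -> list[dict]:
    """Pull all records from one table, following pagination offsets."""
    records, params = [], {}
    while True:
        resp = requests.get(
            f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
            headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}"},
            params=params,
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        records += [r["fields"] for r in data["records"]]
        if "offset" not in data:  # no offset means this was the last page
            return records
        params["offset"] = data["offset"]

prompt = (
    "Summarize trends in these records:\n"
    + json.dumps(fetch_records()[:50], indent=2)  # cap to stay within context limits
)
# `prompt` can now go to ChatGPT, Gemini, Claude, etc. -- which is exactly
# the moat problem Superagent is trying to get ahead of.
```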

Wishing you a great weekend.