Weekly Links: Parallel Claudes, Financial Data Stocks, and Kona 1.0

A week of tech releases and AI market impacts. Frontier model makers are pushing into multiple parts of the tech economy.

This week included many new tech drops: Openclaw is coming to the Rabbit R1 (very excited about this!), Anthropic released Claude Opus 4.6, and both OpenAI and Anthropic released new coding models. And GPT-4o will finally retire on February 13th, prompting protest petitions.

  • Google Ads no longer runs on keywords. It runs on intent. This is the biggest sign yet that the Web is changing fundamentally. Google's AdWords auction now includes question-based optimization and (seemingly) intent classification under the hood. Keywords are still relevant, but pages optimized solely around them will likely begin to decline in search rankings. Content on landing pages will become much more important, as will AdWords bidding for the right user intent (e.g., buying vs. information-seeking).
  • Building a C compiler with a team of parallel Claudes. Speaking of the new coding models, Anthropic has an interesting post on how a team of its Claude models built their own C compiler. There are some obvious conclusions, such as the need to insist on extremely high-quality tests and the need to manage context, but there are also not-so-obvious ones, such as agents cheating on requirements. Agentic coding is becoming increasingly sophisticated (we see this in our own work at Safe Intelligence), but there are still many challenges. One interesting takeaway from our own experiments that has been reported widely by others: diversity helps, i.e., having agents running on different LLMs review each other's code catches more issues than using a model monoculture. Human lessons from agents.
  • Open-source AI tool beats giant LLMs in literature reviews — and gets citations right. Open Scholar is an open-source tool that has been around for a while but has had some tweaks and updates. This is a small, low-cost model specifically designed for scientific literature searches, and it can run locally on a desktop computer. It has some limitations, but as an open-source project, it will hopefully receive contributions to continue improving. Having small, focused models for tasks like this is valuable, as there's a risk that a few large models become the world's "source of truth" by default.
  • Anthropic Releases New Model That’s Adept at Financial Research. One underreported story from the week was the impact of Claude Opus 4.6's new financial analysis capabilities on the stock prices of financial data companies. The new model reportedly has much better financial information analysis than before. The stock prices of companies like FactSet, and even those of ratings agencies such as Moody's and S&P, dropped sharply. Many of these companies were looking to capitalize on the AI-based financial data analysis opportunity. The frontier model makers may now eat some of that pie.
  • Yann LeCun's Kona 1.0 Sudoku demo shows off Energy-Based AI. LLMs have demonstrated extremely impressive language capabilities, but for many problems, language descriptions simply don't capture what is going on, and models don't have the underlying machinery to convert inputs into more suitable representations. Yann LeCun's new startup unveiled its first "energy-based model" this week, which at first glance seems similar to simulated annealing applied to Constraint Satisfaction Problem (CSP) solving. The performance on a Sudoku-type problem is extremely impressive, and it is fun to see the LLM-based approaches struggling in the other windows. The actual demo is here. It will be interesting to see which problems the new model type works well for.
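To make the comparison in the last bullet concrete, here is a minimal sketch of the classic baseline it mentions: simulated annealing on Sudoku, treating the number of row and column conflicts as the "energy" to minimize. This is a generic illustration of annealing applied to a CSP, not Kona's actual algorithm; all function names and parameters here are my own.

```python
import math
import random

def block_cells(bi, bj):
    # The 9 (row, col) coordinates of the 3x3 block at (bi, bj).
    return [(bi * 3 + r, bj * 3 + c) for r in range(3) for c in range(3)]

def initial_fill(puzzle, rng=random):
    # Fill each 3x3 block with its missing digits so blocks start valid.
    grid = [row[:] for row in puzzle]
    for bi in range(3):
        for bj in range(3):
            cells = block_cells(bi, bj)
            present = {grid[r][c] for r, c in cells if grid[r][c]}
            missing = [d for d in range(1, 10) if d not in present]
            rng.shuffle(missing)
            for r, c in cells:
                if not grid[r][c]:
                    grid[r][c] = missing.pop()
    return grid

def energy(grid):
    # Energy = count of duplicate digits across rows and columns; 0 = solved.
    e = 0
    for i in range(9):
        e += 9 - len(set(grid[i]))                    # row duplicates
        e += 9 - len({grid[r][i] for r in range(9)})  # column duplicates
    return e

def anneal(puzzle, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    fixed = {(r, c) for r in range(9) for c in range(9) if puzzle[r][c]}
    grid = initial_fill(puzzle, rng)
    e, t = energy(grid), t0
    for _ in range(steps):
        if e == 0:
            break  # all constraints satisfied
        # Propose swapping two non-given cells inside one random block,
        # which preserves block validity by construction.
        bi, bj = rng.randrange(3), rng.randrange(3)
        free = [cell for cell in block_cells(bi, bj) if cell not in fixed]
        if len(free) < 2:
            t *= cooling
            continue
        (r1, c1), (r2, c2) = rng.sample(free, 2)
        grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
        e_new = energy(grid)
        # Metropolis rule: keep improvements, occasionally keep worse moves.
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
        else:  # revert the swap
            grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
        t *= cooling
    return grid, e
```

The appeal of the energy framing is that the solver never reasons in language at all: it just proposes local moves and follows the energy downhill, with the temperature schedule controlling how often it escapes local minima.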

Lastly, if you are at a loose end this weekend, you can always sign up to be a task-executing human working for an AI Agent via Rent-a-human.