Sunday Links: Toy Safety, Dogfooding, and Shared Task Representations
Large companies push AI internally (and their own products), AI safety for toys is a hard problem, and neural structures in monkeys point to shared representations for generalization.
This week Warner Music wrapped up its court case against Suno and then signed a deal with them, Russia accidentally destroyed part of its Soyuz launch infrastructure, and Claude Opus 4.5 has been wowing with coding performance by using far fewer tokens for tasks despite being a larger model.
- After a teddy bear talked about kink, AI watchdogs are warning parents against smart toys. Smart toys are an appealing category for many toy makers (high engagement, high price points). Unfortunately, AI is a technology that, unlike almost anything that has come before, has more capabilities than its use case needs. So rather than pushing a technology beyond its limits to do something new, the task becomes suppressing abilities that are inappropriate, and that is actually much, much harder.
- Amazon pushes in-house AI coding tool Kiro over competitors', memo shows. Having played with Kiro, it's actually an interesting tool, and the spec-driven approach it uses should spread (we're having good experiences with it at Safe Intelligence). It's also natural that Amazon wants enough internal usage to learn and improve. Beyond that, though, if the memo is accurate, this is a shortsighted move. If Kiro can't compete internally without being mandated, how will it win in the marketplace? If it isn't better, how much productivity cost (and developer unhappiness) will Amazon be willing to bear? And how will they benchmark honestly against other tools?
- Nvidia CEO Jensen Huang allegedly asks managers discouraging AI use: ‘Are you insane?’ — assures employees their jobs aren’t at risk because of AI. Dogfooding story #2. Jensen Huang's encouragement to use AI is a little more nuanced: “I want every task that is possible to be automated with artificial intelligence. I promise you, you will have work to do.” What's interesting is why managers would discourage AI use in the first place. The quote suggests fear for their own jobs. I suspect employees will indeed have more work to do, even with AI automation (Nvidia is lifted by every new task AI can perform in the market). The other possibility, though, is that AI simply isn't yet good enough at some tasks, or that its output quality falls short. That becomes a quality and diligence question every company (and maybe every function) has to answer internally.
- Building compositional tasks with shared neural subspaces. One of the bigger challenges in neural network development has been the limited ability of current approaches to generalize what they learn and apply it across multiple different tasks. This new Nature paper suggests that representations of tasks emerge in monkey cognition (and potentially in human cognition as well) and can be reused across tasks. In an artificial neural network, it is possible to trace which nodes activate for particular concepts, but it is still much harder to trace how higher-level concepts or sequences are represented, let alone generalized. The authors suggest that the continuous learning nature of the task may have played a role, so perhaps continuous AI learning holds one of the keys to unlocking these abilities.
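To make the "shared subspace" idea concrete, here is a minimal numpy sketch of the kind of analysis used in this line of work (this is not the paper's code or data; the dimensions, noise levels, and function names are invented for illustration): record population activity for two tasks, extract each task's top principal subspace, and measure how much the subspaces overlap via principal angles.

```python
# Illustrative sketch (not the paper's method): do two tasks reuse the same
# low-dimensional neural subspace? All names and dimensions are made up.
import numpy as np

rng = np.random.default_rng(0)

def top_subspace(activations, k):
    """Orthonormal basis for the top-k principal subspace of an
    (n_trials x n_neurons) activation matrix."""
    centered = activations - activations.mean(axis=0)
    # Rows of vt are principal directions in neuron space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T  # shape (n_neurons, k)

def subspace_overlap(basis_a, basis_b):
    """Mean squared cosine of the principal angles between two subspaces:
    1.0 means identical subspaces; random k-dim subspaces in n dims
    give roughly k/n."""
    cosines = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(np.mean(cosines ** 2))

n_neurons, n_trials, k = 50, 200, 3
shared = rng.standard_normal((n_neurons, k))  # shared latent directions
noise = 0.1
task_a = rng.standard_normal((n_trials, k)) @ shared.T \
    + noise * rng.standard_normal((n_trials, n_neurons))
task_b = rng.standard_normal((n_trials, k)) @ shared.T \
    + noise * rng.standard_normal((n_trials, n_neurons))
task_c = rng.standard_normal((n_trials, n_neurons))  # unrelated activity

ba, bb, bc = (top_subspace(t, k) for t in (task_a, task_b, task_c))
print(subspace_overlap(ba, bb))  # high: the two tasks share a subspace
print(subspace_overlap(ba, bc))  # near chance (~k/n)
```

A high overlap between two tasks' subspaces is the signature of a reused, shared representation, which is exactly what the paper reports in monkey cortex.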
- AI eats the World (Again). Benedict Evans released the new version of his semi-annual presentation. The title hasn't changed in three editions now, and he looks at trends that have only accelerated since the last edition. There may not be much genuinely new information here, but it's a nice synthesis.
Wishing you a great weekend.