Weekly Links: $30B, White Collar Work, and Cognitive Surrender
Frontier AI labs get back on top with significant functionality gains. White-collar work looks like it might indeed see mass replacement in the near future.
Apologies for the late post (due to travel). The weekend is almost over for most, but hopefully the links will help make the Monday commute more interesting. This week: Google ships WebMCP (turning the web into an Agent playground?), Bytedance's Seedance spooks Hollywood, and Cisco decides its agents need a babysitter. There's also a growing recognition that extreme (productive) AI use can lead to burnout, much like suddenly trying to manage a team of 24/7 shift workers would.
On to the main stories:
- Anthropic raises $30 billion in Series G funding at $380 billion post-money valuation. Frontier AI funding rounds have become a "blur of billions," so one take on Anthropic's raise would be "yup, expected". Another is to take a brief step back and reflect on the amount raised and the valuation. $30B is a huge cash injection (and they are not the only ones raising at this scale). The company announced that its revenue run rate had reached $14B (from zero three years ago), growing roughly 10x per year. So they are raising roughly 2x current revenue, and the valuation works out to roughly 3-5x forward revenue if growth lands anywhere between 5x and 10x (a quick back-of-the-envelope sketch follows the list below). No doubt the projections behind the deal assume at least 3-5x further growth from there. What is important to reflect on, though, is that everyone involved must believe revenue can grow into the many hundreds of billions within a few short years. That almost certainly means Anthropic, OpenAI, and others capturing not just software spend but also human-worker replacement and/or outright market expansion. AI-powered software will need to do far more than software has done to date to make this growth possible. For context, a $380B valuation is higher than Coca-Cola ($338B today) and roughly 2x Disney ($186B as of today).
- Microsoft AI chief gives it 18 months – for all white-collar work to be automated by AI. This piece anchored a slew of "job displacement" articles this week, covering advertising, trucking, and accounting among others. 18 months seems too radical for a total collapse (technology takes a long time to diffuse), but it does seem likely that real effects will mount over the next 18 months. The ability of Claude and OpenAI's models in particular not only to power new "AI versions" of existing apps but to go "over the top" and interact directly with existing software such as Excel means diffusion could go much faster than expected.
- Introducing Lockdown Mode and Elevated Risk labels in ChatGPT. OpenAI released a restricted-use mode for its ChatGPT service that blocks certain kinds of web access. The other part of the announcement is warning labels for "high risk" AI use (sorry, "Elevated Risk" according to the PR department). These features are undoubtedly useful, but they point to a significant underlying problem: AI is most useful when it has full access to the Internet and can act on the user's behalf, yet it is extremely difficult to prevent malicious manipulation from external sources. A cynical take is that OpenAI's defense after a serious data breach will now be "well, did you have Lockdown Mode enabled?" On a practical level, though, most users will plough on with unsafe usage. Much more needs to be done here – for example, whitelisting safe sources (though that would further hurt the open web by creating gatekeeping controls).
- Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. This week's edition is rapidly turning gloomy, but this paper fits too well to leave out! The authors, two Wharton researchers, describe experiments that demonstrate a phenomenon almost everyone who has used AI will intuitively recognize: mentally accepting inputs from an external system even when we know it can only partially be trusted. They suggest that AI acts as a kind of System 3 cognition (alongside the System 1 and System 2 often described) that sits outside the brain. It becomes too easy to trust these external inputs, and we trust ourselves to "catch" the few cases where the AI may be wrong. In practice, of course, we review the results with a far less critical eye than we would if they came from a source we did not trust. This points to a serious problem with widespread use of systems that are mostly accurate but sometimes wrong: it is much easier to kid yourself that a skim of the auto-generated email is enough when it appears at the click of a button than to spend the time to really read it.
- GPT‑5.2 derives a new result in theoretical physics. To end on a more optimistic note, this is one of the standout cases of an AI model helping leading physicists go beyond what is already proven. AI will allow us to push the boundaries of our understanding when used in the right way. At some point, it will get increasingly hard to check the work, but at least with physics, the real world is a great testbed for verification! Another interesting axis would be using AI to try to design new instruments and measurement devices to do that verification.
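Since the Anthropic multiples are easy to get wrong, here is the quick back-of-the-envelope sketch promised above. It uses only the figures reported in that item and assumes "forward revenue" means one year out:

```python
# Back-of-the-envelope multiples for the Anthropic raise, using the figures quoted above.
run_rate = 14e9        # reported annualized revenue run rate ($14B)
raise_amount = 30e9    # Series G raise ($30B)
valuation = 380e9      # post-money valuation ($380B)

print(f"Raise / current revenue: {raise_amount / run_rate:.1f}x")        # ~2.1x

for growth in (5, 10):  # assumed year-over-year revenue growth multiple
    forward_revenue = run_rate * growth
    print(f"Valuation / forward revenue at {growth}x growth: "
          f"{valuation / forward_revenue:.1f}x")                         # ~5.4x and ~2.7x
```

In other words, the raise is a bit over 2x current revenue, and the post-money valuation is roughly 3-5x one-year-forward revenue depending on whether growth lands nearer 5x or 10x.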
Wishing you a great week. Perhaps we all need to add "non-AI" thinking days to keep our brains in shape.