Weekly Links: Insight, Software Taxes, and Global AI Competition

The global AI race heats up, and we discuss whether restrictions make sense. Software taxes are under attack, but they probably make sense as ever more automation arrives.

This week, Stewie the Robot flew Southwest, AWS Agents will soon be able to accept USDC, and Intercom became Fin (AI eats software?).

On to the main stories:

  • When Knowledge Is Cheap, Insight Is Everything: Jevons Paradox applied to Torah Learning (@ZoharAtkins). A nice Twitter thread that applies thinking about knowledge abundance to the deep study of the Torah. I'm not familiar with Torah studies, but this resonates a lot: bodies of knowledge contain atomic facts, and over time (with study) compound insights form and dissolve as the dots are joined in new ways. You need "compute" and questions to unlock those insights. For centuries, humans have been the computers. Now we can apply more compute, but you still need to ask the right questions and explore the landscape, as well as recognize the gems you find.
  • Newsom Pitches Software Tax to Raise Billions in New Revenue. This proposed move in California didn't get much coverage beyond some outrage, probably because multiple US states already tax SaaS software. It may well never happen either. However, I'm mentioning it here because it seems to me that as AI and automation accelerate, the only way to provide healthy social safety nets anywhere is to do exactly this: tax AI, automation, and probably all software more heavily. The unit of work is shifting from the worker to automation; we also need to shift the focus of taxation.
  • Google adds Gemini-powered dictation to Gboard, which could be bad news for dictation startups. This is a minor news item and not unexpected, but there are some cool multilingual demos. More importantly, though, Gboard (which this attaches to) is widely used on Android devices, and this highlights how hard it will be to compete with Google on interface-related innovations. Other dictation apps that were blossoming can't integrate this deeply into the stack, and barring antitrust issues, you can see this new type of interface simply shipping as a default with the operating system in the future.
  • 2028: Two scenarios for global AI leadership. This article from the Anthropic leadership team focuses on the US-China race for AI. There are important points in the text (like the significant power of new models). The core argument is that "Democracies must stay ahead," and that to do so, Chinese firms' access to chips should be restricted, as should their ability to distill models. Specifically, the piece argues that this makes a global "AI Safety" conversation more likely than a flat-out race. Leaving aside that this would be commercially convenient for Anthropic (although in the long run, I'm not even sure about that), and definitely agreeing that AI used detrimentally by state actors could be terrible, I think this is naive and short-sighted. The AI cat is out of the bag; with every advance an AI lab makes, the world now knows that advance is "possible". The surface area of what AI is and how its boundaries can be pushed forward is also vast (chips, yes, but also algorithms, applications, tool use, etc.). Whether it takes six months, two years, or ten years, each new breakthrough will be replicated. The article talks about 2028, but we will be there in the blink of an eye. In my view, it's better to look ahead to 2035, 2050, and beyond. The idea of a year-by-year race to stay 12-24 months ahead makes no sense. It would be much better to acknowledge that most parties with the means (financial and human capital) will have close to cutting-edge capabilities over time. I would argue that this actually makes a global safety consensus more likely to emerge than trying to hold various parties down.
  • The jobs apocalypse: a (very) short history. A nice summary from The Economist of the discussion about a possible AI jobs crisis. The article suggests it's less visible, or less likely, than most people perceive. Anecdotally, though, I'm personally definitely seeing more people losing roles than I would have expected. Most of this is in the tech sector, which may be particularly affected, but it seems more widespread than that. My most interesting takeaway comes at the end: if AI really does cause significant unemployment, it will likely happen during the next significant downturn. This makes sense, as that's when businesses go under or have to pull on every last thread to become lean.

On that note of economic uncertainty and global AI competition, wishing you a great weekend.

PS: I'm also writing on our new /Spec27 Blog now. The first post is on the five types of enterprise AI Agents!