Sunday Links: ChatGPT outages, Device Form Factors, and LLM Failure Clickbait

ChatGPT outage scares everybody, Apple paper on LLM reasoning, LLMs.txt

Another busy week in AI. The ChatGPT outage actually happened the week before last; has the world recovered from the trauma yet? Here are this week's links:

  • Yep, GoPro Should Be Really Worried About Meta’s New ‘Performance’ Smart Glasses. Meta is releasing a new iteration of its smart glasses, this time under the Oakley brand. I have a pair of the Ray-Ban Metas and I think they are already an amazingly capable device (action videos, voice queries, phone calls, and music playback without in-ear earphones). This really could be a form-factor shift.
  • Salesforce adds AI to everything, jacks up prices by 6%. This isn't going to end well. Following up on last week's curbs on Slack data exports, Salesforce is raising prices for added AI functionality. The enterprise of the future is likely going to be a powerful data layer connected to everything, plus a powerful workflow layer with built-in AI and customizable UIs. The more open, the better, at every layer. It's hard to see an argument that Salesforce will become the provider of choice at all layers. Most organizations have a lot more than sales, marketing, and customer support. (Also, if you have been a Salesforce customer for any length of time, a 6% price rise is fairly run of the mill...).
  • Google: No AI System Currently Uses LLMs.txt. The idea behind LLMs.txt is that you add a file with this name to your site's root to give AI systems a curated, LLM-friendly index of which pages to read and use. It seems that none of the crawlers are even reading the file at the moment. I guess that since none of the AI systems respect it, no one is incentivized to be the first to invest in maintaining one.
  • ChatGPT is back following global outage — here's what happened. Actually, no one really knows what happened at OpenAI that caused the outages, but what did happen was significant panic amongst knowledge workers across the world. The short time it has taken for millions of people to build the convenience of quick answers into their daily workflows is quite extraordinary. Most users are aware that chatbot answers can sometimes be flat-out wrong, so you would expect uptake to be a bit more cautious, but this clearly hasn't been a barrier to rapid, trusting usage. "Is it good enough" became "I hope it's good enough" very quickly.
  • A Knockout Blow for LLMs? A happy Gary Marcus writing about Apple's critical LLM reasoning paper (and including a clickbait title). Apple's paper (here) analyzes chain-of-thought and other reasoning approaches currently used in leading-edge LLMs. As the complexity and depth of reasoning go up, LLMs quickly become lost and near-random in their answers, even with reasoning turned on. This would seem like a knockout blow for LLMs, but in my view, it really isn't. It has always been clear that LLMs are not working with a high-fidelity world model under the covers; they are working with collections of loose associations that, for simple problems, turn out to be enough. What I expect AI evolution to look like is 1) much more specialist training that helps LLMs reason better for very specific tasks (they can get good, but not expert), and 2) tool use, where LLMs start to use the same simulators and computing solutions humans do. LLMs are a truly unique breakthrough in computing and give us an association layer we've never had before, but the way to scale them is not to try to make them PhD-level experts in everything; it is to make them expert tool users.
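For anyone curious about the LLMs.txt item above: the proposal is just a markdown file served from the site root. A minimal sketch (the site name, URLs, and section labels here are hypothetical, following the llmstxt.org format of an H1 title, a blockquote summary, and H2 sections of links) might look like:

```markdown
# Example Site

> One-line summary of what this site covers, written for an LLM to read.

## Docs

- [Getting started](https://example.com/docs/start.md): hypothetical quick-start page
- [API overview](https://example.com/docs/api.md): hypothetical reference page

## Optional

- [Changelog](https://example.com/changelog.md)
```

Whether any of this matters depends, of course, on crawlers actually fetching the file, which, per Google, none currently do.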

You might have also noticed a subtle difference in this week's headline image. I still like the oil-painting vibe from previous posts, but ... Midjourney broke my workflow with their update this week (thanks, Midjourney!). I haven't figured out how to get it back yet.

Wishing you a great weekend.