Saturday Links: AI Fast Food, Token Explosion, and Context Engineering

From AI-driven charter schools to how reasoning models are driving the demand for AI capacity. Another packed week of stories.


Almost at the end of the summer in the northern hemisphere, and news has been a little slower. Nonetheless, there have been some interesting stories:

  • Taco Bell rethinks AI drive-through after man orders 18,000 waters. AI systems that take fast-food orders face some of the most extreme stress tests around: imperfect audio, a wide range of accents, and complex menus. It's interesting to see the walkbacks, but they seem likely to be temporary. There's too much potential upside for fast-food companies not to deploy this technology, even though it will reduce entry-level jobs.
  • OpenAI Acknowledges That Lengthy Conversations With ChatGPT And GPT-5 Might Regrettably Escape AI Guardrails. Guardrails built into chatbot systems aim to stop the underlying LLM from returning answers on problematic topics. This is extremely hard to do, since many concepts can be expressed in complex ways. Seemingly, as context windows fill up with information from the discussion, guardrails become harder to enforce. It's possible that versions and variants of Anthropic's Constitutional AI might be needed to better combat this type of deep drift.
  • Tokens are getting more expensive (via Stratechery). Great piece that tells you everything you need to know about AI demand and cost. The press headlines all show the cost per output token going down, but the hidden trend is that the number of tokens consumed and returned by AI to respond to a query has risen rapidly. Reasoning models take many more steps to respond to a prompt, and agents can often run for a long time before returning a result. All this means that more computation is expended for every input token. Demand for tokens seems set to rise for the foreseeable future, and being profitable as an AI application or model company might be further and further out of reach.
  • AI-driven private schools are popping up around the U.S., from North Carolina to Florida. Alpha Schools is implementing a model with two hours of teaching core academics per day, plus learning and exploration powered by AI for the remainder of class time. While I think AI could genuinely create learning experiences that are additive, it seems reckless to adopt such systems so fully in a school context when we barely understand their potential impact.
  • A Survey of Context Engineering for Large Language Models. A term that's been popping up with more regularity in AI circles in the last few weeks is Context Engineering. There are a number of definitions and also a nice course by David Kimai. This paper provides a useful overview of what Context Engineering is currently considered to be. I suspect all these definitions will end up being too rigid, but the core concept makes a lot of sense. Today's LLMs have largely operated in a vacuum - taking queries in and returning text or images back. To use AI for long-running and repeated tasks, though, the whole environment matters: memory, previous conversations, the time of day, the state of the stock market, etc. All of these elements make up the world in which a computation takes place. From an engineering point of view, the challenge becomes how to decide what context is made available for any particular query. More context seems better, but it can rapidly make a system drown in unwanted details. This is a modern manifestation of the classic AI Frame problem. As AI evolves, we'll clearly move from prompt engineering to context engineering, but we'll also need to get very good at managing context overload for systems.
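The token-economics point in the Stratechery item above comes down to simple arithmetic: per-token prices fall, but reasoning models emit far more tokens per query, so the cost of a query can still rise. A quick sketch with made-up round numbers (none of these are real model prices or token counts):

```python
# Illustrative arithmetic only: prices and token counts are hypothetical
# round numbers, not real model pricing.
old_price_per_1k = 0.06   # $ per 1k output tokens, hypothetical older model
new_price_per_1k = 0.01   # hypothetical newer model, 6x cheaper per token
old_tokens_per_query = 500    # short direct answer
new_tokens_per_query = 8000   # reasoning model: many intermediate steps

old_cost = old_price_per_1k * old_tokens_per_query / 1000  # $0.03
new_cost = new_price_per_1k * new_tokens_per_query / 1000  # $0.08

# Per-token price fell 6x, yet the query got ~2.7x more expensive,
# because token consumption grew 16x.
```

The headline metric (price per token) and the metric that drives bills (price per query) can move in opposite directions.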
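The engineering challenge described in the context-engineering item above - deciding what context to make available for a query without drowning the system in detail - can be sketched as a budgeting problem. The sketch below is my own illustration, not anything from the survey: the item names, priorities, and the rough 4-characters-per-token estimate are all assumptions.

```python
# A minimal sketch of one context-engineering decision: given a token
# budget, greedily include the highest-priority context items that fit.
# Priorities and the chars-per-token heuristic are illustrative assumptions.

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def assemble_context(items: list[tuple[int, str]], budget: int) -> list[str]:
    """Pick items by descending priority until the token budget runs out.

    items: (priority, text) pairs; higher priority = more important.
    """
    chosen = []
    remaining = budget
    for priority, text in sorted(items, key=lambda it: -it[0]):
        cost = estimate_tokens(text)
        if cost <= remaining:
            chosen.append(text)
            remaining -= cost
    return chosen

items = [
    (3, "User query: summarize today's meeting notes."),
    (2, "Memory: user prefers bullet-point summaries."),
    (1, "Previous conversation: long transcript ..." * 50),
]
context = assemble_context(items, budget=50)
# The query and the small memory note fit; the huge transcript is dropped.
```

Real systems replace the static priorities with relevance scoring (e.g. embedding similarity), but the core trade-off is the same: every item included crowds out something else.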

Wishing you a great weekend.