Sunday Links: ServiceNow Agents, Nondeterminism, and Resilience by Design
Perplexity scraping, Safe Intelligence's new Substack, nondeterminism, and robot massages.

As of this week, you can now engage an AI cofounder, PwC is slowing down junior hiring, the judge in the Anthropic case says there is more work to do, and Hugging Face has introduced a handy data transformer. Here are this week’s most interesting links:
- ServiceNow brings Vibe Coding to enterprise workflows, collapsing app development from weeks to minutes. Building networks of agents for specific tasks is a clear deployment path for AI in the enterprise, and numerous AI agent startups are already targeting this market (Crew.AI, … and others). It always seemed inevitable that incumbent players would enter the area too, and ServiceNow is one of the best placed of them. I don’t think vibe coding is really the right headline here; this is simply the steady layering of AI into existing infrastructure. Companies like ServiceNow are arguably much better placed to take advantage of this opportunity than hyperscalers (too low in the infrastructure stack) or office suite vendors like Microsoft and Google (not connected enough to core systems).
- Perplexity's definition of copyright gets it sued by the dictionary. Perplexity continues to be the wild child of AI content scraping. A new lawsuit from Merriam-Webster and Encyclopedia Britannica alleges that the company invents definitions and then attributes them to their brands, and Perplexity seemingly has a history of doing this. I’ve been a big fan of Perplexity since launch, and it’s good to see some medium-sized players mixing it up with the giants. Inventing content and misattributing it are serious trust issues, however. It’s unlikely this is down to LLM hallucination errors, since it reportedly occurs primarily when Perplexity has been prevented from scraping pages with its web crawler. The company is reportedly raising $200M at a $2B valuation; hopefully, it will cease this practice.
- Defeating Nondeterminism in LLM Inference. Ever wondered why LLMs give different answers to the same question, even with the temperature turned down to zero? The temperature parameter controls how creative the LLM is allowed to be when answering, and when it is set to zero you would expect identical answers every time. Not so. This paper is a great deep dive into non-deterministic responses in LLMs even at temperature zero (a hint: floating point matrix multiplications and batch sizes during processing). There is a tiny illustration of the floating point effect in the sketch after this list.
- Robots now give massages. This seems like a hard technical challenge, and it’s impressive to see that the experience is actually pretty good. Having said that, I really wonder whether it makes sense to try to automate every job.
- Lastly, this week I wanted to highlight some great new content on our company Substack, Resilient by Design. Huge credit to the team, and especially to Brain John Aboze, for the content stream. The Substack dives deep into how to make machine learning systems resilient once deployed, combining deep technical content with event reports and higher-level views. I'll showcase a few other articles here in a little while, but if you're interested in the topic, please consider subscribing there as well.
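
As promised above, here is a tiny, hedged sketch of the nondeterminism point. Floating point addition is not associative, so changing the order of a reduction (which is exactly what different batch sizes and kernel choices do inside a matrix multiplication) can nudge the resulting logits and occasionally flip the argmax even at temperature zero. This is plain Python with made-up numbers for illustration, not the paper's actual experiment:

```python
# Floating point addition is not associative: the same numbers summed in a
# different order can give a different result. Inside an LLM, batch size and
# kernel choice change the reduction order of matrix multiplications.
vals = [0.1, 1e16, -1e16, 0.3]

left_to_right = sum(vals)        # one reduction order -> 0.3
reordered = sum(sorted(vals))    # a different order   -> 0.0
print(left_to_right, reordered, left_to_right == reordered)

# The same tiny wobble applied to two nearly tied logits can flip the argmax,
# so greedy (temperature-zero) decoding still varies between runs.
logits_run_a = [2.0000000000000004, 2.0]  # hypothetical, nearly tied logits
logits_run_b = [2.0, 2.0000000000000004]  # same values, perturbation swapped
print(max(range(2), key=lambda i: logits_run_a[i]),
      max(range(2), key=lambda i: logits_run_b[i]))
```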
Wishing you a great weekend!