Sunday Links: A Snapshot of the AI Future
Sunday Links: A Snapshot of the AI Future, with five interviews from today's leaders that outline their thinking about what's next.
This week saw Perplexity get sued again (they really do seem to be skating close to the edge), the New York Times drop an article on the dysfunctions and prevalence of "AI Voice" in writing, and a raccoon have the time of its life in a liquor store.
However, we’re going to ignore all of these. Instead, I decided to change formats just a little this week and compile a brief list of recent deep-dive podcast interviews with leading people in AI.
These five interviews all came out in the last few weeks, and I think together they give a really interesting view into the current state of AI/ML technology and where it might be going.
I've actually linked to a couple of them before, but I'm repeating them here with more commentary because I think they fit together nicely as a collection:
- Ilya Sutskever on the Dwarkesh Podcast. Ilya has been involved in many of the language model breakthroughs of the last 10-15 years. After co-founding OpenAI with Sam Altman, he went on to found Safe Superintelligence (SSI). This interview is one of the first detailed conversations on what SSI is working on since it raised more than $3B in the past two years. I found the interview fascinating, and Ilya is one of the few AI leaders trying to give answers to what we might actually need to do for true superintelligences to act in ways that are "safe" for humanity. His answers suggest that SSI believes there are more efficient ways to train AI and that future AI will be more capable of bootstrapping its own knowledge. It also seems clear that they are working on how to encourage AI-human alignment. The answers may not be particularly comforting, but they do (I think) make a lot of sense. Starting with a respect for sentient life seems like a good place to begin when trying to train beings that one day may well have power over human civilization.
- Fei-Fei Li & Justin Johnson of World Labs on the Latent Space Podcast. Fei Fei Li is one of the pioneers of Machine Vision (best known for establishing ImageNet, one of the earliest Image Data sets that powered real breakthroughs in image neural networks). Fei-Fei and Justin are the founders of World-Labs, which recently released a new foundation model for spatial reasoning. The interview goes into how the World Labs team thinks about Intelligence and how it is often grounded in physical worlds, something that is very much missing from today's LLMs. Hat tip to TrishaRRtip for this.
- Satya Nadella on the Dwarkesh Podcast. A second Dwarkesh reference, but this interview is just so good. The interviewers really push the Microsoft CEO on plans, risks to the business, and assumptions. Nadella rises to the challenge, though, and in the process lays out quite a compelling framework for how Microsoft as a business can thrive despite the rapid change in the industry. I think Nadella is too optimistic that current office suite productivity software users will turn into AI productivity users. On the other hand, I suspect he's probably right on the money that it's inference workloads that really need to be planned for.
- Maor Schlomo on 20VC. A clickbait title, as always, courtesy of Harry Stebbings, but a good interview. Maor is the founder of Base44, which was acquired by Wix and provides a vibe coding platform similar to Lovable, Replit, and Bolt. The interview is interesting because it delves into the motivations behind software development and how tools are likely to evolve to meet new needs. My hunch is that he's right that it won't take long for there to be many more non-coding "engineers" than coding engineers.
- Martin Fowler on Pragmatic Engineer. ThoughtWorks' Martin Fowler is a grounded thinker on software development, and it's interesting to hear his take on AI for software development. The discussion covers many of the obvious topics, like code generation, but also non-obvious ones like LLMs for analyzing legacy codebases. I particularly like his point that for long-term AI use on a project to succeed, we should deliberately craft the abstractions and terminology that define the project and underpin its interactions with AI.
Wishing you a great weekend!