Sunday Links: Group ChatGPT, Deep Learning Weather Models, and American AI Regulation?
OpenAI releases a group chat feature for a subset of ChatGPT users, Anthropic pushes for AI regulation, and weather models get the deep learning treatment.
Apologies for the late post this week - some travel delayed things! In any case, here are this week's news links:
- Introducing group chats in ChatGPT. OpenAI's announcement this week didn't get much coverage, but I think it's actually huge: you can now interact with AI as a group, with everyone in the chat seeing every question and every response. I suspect this will be a massive change to how AI is used for work. Until now, ChatGPT has been a personal productivity tool; now, any group of people can share a context. It could also radically change dynamics in the personal sphere and start stealing share from WhatsApp and other group messaging services. When answering the classic "where shall we go for dinner" question, having AI in the "room" will really change the experience. Facebook now has another competitive front to fight on, and Google will suddenly have to think hard about how to include group functionality in search.
- Traditional SaaS May Be Dying. But It Might Also Be Your Fault. Jason Lemkin on the SaaStr blog often posts thoughtful strategy and tactics pieces on Software as a Service. In this post, he details some of the business behaviors that, as AI drives change, could damage SaaS usage significantly. SaaS has had a rocket ship ride, but it's clear that some SaaS solutions will come under increasing pressure. It's only natural that part of the reaction will be to act defensively by locking in customers, raising prices, and trimming costs. Done badly, though, that is the start of a downward spiral.
- Google updates its weather forecasts with a new AI model. Google has been developing a deep learning-based forecasting model that it says is now good enough to include in customer-facing products. The new model does away with complex physics-based simulators that take hours to run. Instead, the deep learning approach is trained on huge amounts of data to perform a similar "what's next" function as language models do. I'm quite surprised this approach is already as accurate as physics-based models, given that those models have been continuously developed for 50+ years and weather is an extremely complex domain. What this result suggests is that even the state-of-the-art physics models aren't capturing all the important relationships, but deep learning seems to find them.
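To make the "what's next" analogy concrete, here is a minimal, purely illustrative sketch of autoregressive forecasting: a model predicts the next atmospheric state from the current one, and its own outputs are fed back in to roll the forecast forward. Everything here (the `step_model` stand-in, the linear transition matrix, the four-variable state) is a hypothetical assumption for illustration, not Google's actual architecture.

```python
import numpy as np

STATE_DIM = 4  # e.g. temperature, pressure, humidity, wind speed at one site

# Stand-in for a trained neural network: a single fixed linear transition.
# A real weather model would be a large learned network over a global grid.
W = np.eye(STATE_DIM) * 0.95

def step_model(state: np.ndarray) -> np.ndarray:
    """Predict the atmospheric state one time step ahead."""
    return W @ state

def rollout(initial_state: np.ndarray, n_steps: int) -> np.ndarray:
    """Autoregressive forecast: feed each prediction back in as the next input."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(step_model(states[-1]))
    return np.stack(states)

rng = np.random.default_rng(0)
forecast = rollout(rng.standard_normal(STATE_DIM), n_steps=6)
print(forecast.shape)  # (7, 4): the initial state plus six predicted steps
```

The loop structure is the point: like a language model generating token by token, the forecaster consumes its own output, which is why small per-step errors can compound over long horizons.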
- He’s Been Right About AI for 40 Years. Now He Thinks Everyone Is Wrong. Yann LeCun is one of the best-known research figures in deep learning and is leaving Meta later this year to start a new company. This write-up of his work is quite detailed, but according to LeCun the title is inaccurate: he says he doesn't disagree with anyone. I don't know him personally, but from his fairly open social media posts, I'd say he is quite generous in supporting all approaches to a problem. It is true that he is a proponent of building explicit world models into training and inference, a position that runs contrary to much of the field (and one I agree with), but I don't see him spending much time trash-talking other approaches.
- ‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future. In a 60 Minutes interview with Anderson Cooper that aired this week, Anthropic CEO Dario Amodei called for more regulation of AI, arguing that tech leaders like himself should not have free rein over all developments. At the same time, his company published a post making a similar argument. I think he is certainly right that more regulation is needed (in the United States and elsewhere), but what is notably absent is what such regulation should look like. It would make sense to me for leading labs to take the initiative and propose an industry-wide code of conduct that reflects strong safety practices. I'm not in the AI Doom camp that believes AI will lead to the destruction of humanity, but there are clear possible harms that should be mitigated.
Wishing you a wonderful week.