Friday Links: Tiny LLMs, Apple OpenSource and Perplexity raises $73M

Image (Midjourney): the TinyLlama mascot as a baby steampunk robot llama running

Week 1 of 2024, and it's already Friday. In case you missed the update earlier this week, this newsletter is now coming to you via Ghost and not Substack. Happy New Year, and here are the links I found most interesting so far this week:

  • TinyLlama: Putting a powerful LLM in your pocket. "Micro" is becoming a thing (which is excellent news). TinyLlama is a 1.1B-parameter model that was trained on fairly accessible hardware in just a couple of months (that's important) and is small enough to run fast on a powerful laptop (ok, a new MacBook Pro with an M3 Max, but still; see the short sketch after this list). This is a big deal: it does fairly well on benchmarks and suggests that for many applications we simply won't need to make a call to cloud servers. Apple's strategy is looking very solid right now. Speaking of which...
  • Apple quietly launched an open-source multimodal LLM called Ferret: the model is also small and is designed to be good at object identification and association in images. There's been speculation that, since it was initially based on open-source code, Apple is sharing it to comply with the open-source license so they can ship it within a product soon.
  • Specialized LLMs are coming too: JPMorgan launches DocLLM: This is clearly still research, but it's interesting to see large organizations focus on very specific use cases. The system here is for working with common document types typically found in businesses: invoices, receipts, memos, contracts, etc. These documents carry information not only in their text but also in their layout; modeling both in combination gives higher accuracy and flexibility.
  • Perplexity raises $73M. One piece of funding news that stood out was AI search/information retrieval startup Perplexity. They raised their Series B, bringing the total raised to $100M. Congrats to the team: the copilot idea applied to search is definitely powerful, and it's a strong combination of LLM results and search. The big challenges are likely to be: 1) how to build enough of an audience that chooses Perplexity over Google and 2) how to keep hallucinations to a minimum. In Perplexity's case, there is still a search step to ground answers, and since it's a copilot, the human is in control, but automation will likely need to increase over time as user expectations rise.
  • The New York Times is suing OpenAI and Microsoft for copyright infringement. In what is probably the highest-profile such case so far, the New York Times alleges that OpenAI trained on millions of NYT stories and now returns much of that text for certain queries. They argue that OpenAI is a direct competitor, causing harm to their business. This argument is likely to run and run. The NYT would certainly have a point if the answers coming back are really close to verbatim, likely less so if they are generic. What they are certainly right about is that chatbot interfaces are highly likely to become a competing interface for reading full stories. On the other hand, the NYT might end up with the kind of result they don't want: 1) OpenAI removing NYT content from training and inference and 2) returning only current news from other partners like AP and Axel Springer. This would result in a scenario where different chatbots/AIs serve content from a fragmented set of cultural and media sources. In any case, OpenAI should really not be returning verbatim text.
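As promised above, here is a minimal sketch of what "running TinyLlama in your pocket" can look like. It assumes the chat checkpoint published on Hugging Face (the TinyLlama/TinyLlama-1.1B-Chat-v1.0 repo id is my assumption; swap in whichever checkpoint you actually use) and the standard transformers API:

```python
# Minimal sketch: load TinyLlama locally with Hugging Face transformers
# and generate a short completion. Repo id is assumed; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~2 GB of weights at fp16, fine for a recent laptop
)

prompt = "Explain in one sentence why small language models matter."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice you'd probably reach for a quantized build (e.g. via llama.cpp) to get the "fast on a MacBook" experience, but the point stands: the whole thing runs without ever calling a cloud server.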

Wishing you a relaxing and peaceful weekend.