Smokebox: AI Investing, Watching a Neural Network Learn, and Open Source

What's new and interesting this week

Welcome to the SteampunkAI Smokebox! I aim to publish a weekly digest of some of the most interesting AI-related news items. There’s SO much news that this will only ever be a small sample, but it’ll be well curated :-).

In this week’s batch, I’ll mix in a few that have been around a bit longer:

  • Mosaic Ventures’ take on how to invest in AI. This is a nice, explicit framework (and a case for investments in Europe - yay :-)). I’d add at least one more axis for investing in LLMs, specifically where an AI application sits on the creative spectrum. The more a use case requires exploring novel solutions that a human operator can evaluate, the more likely an LLM will prove powerful in the short and mid-term. The more you want fully automated solutions in a narrow range, the less value you’ll get.
  • AI image personalization in a tiny box from NVIDIA. This is big news that’s been largely missed. Using a set of locked keys (tokens), the method creates a highly specific, fine-tuned model that is quick to train and has a small memory footprint. This seems like something that will become more and more common in applications of LLMs: boiling down models to their essentials. Coming to a photo-me-booth near you very soon. (There’s a toy sketch of the general idea just after this list.)
  • Watch a neural network learn. A cool visual guide to how neural networks learn. It's worth 20-ish minutes of your time if all the arXiv papers are blurring together. (A minimal “watch it learn” code sketch also follows below.)
  • Yann LeCun’s recent keynote at MIT - Objective-driven AI. I plan a deep dive into this in a future post, but it’s a valuable talk on why today’s LLMs are only part of what we need for AI to work in many domains.
  • LLaMA2 isn't "Open Source" - and why it doesn't matter. A short, provocative post from Alessio Fanelli. It brings together the tension between open-sourcing your code, having large players SaaS-ify it, and the fact that an AI model’s code is barely useful without its weights. I’m not sure “it doesn’t matter,” but it’s certainly true that software licensing will need a lot more head-scratching. (Also, if an LLM wrote your code, can you copyright it?)
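
Here’s that toy sketch of the personalization idea, in PyTorch. To be clear, this is not NVIDIA’s actual method (their work does something cleverer inside the model’s attention weights); it only illustrates the general recipe the bullet gestures at: keep the big pretrained model frozen and train a tiny set of new parameters, here a single new token embedding, so the “personalized model” you store is only a few kilobytes. All names, shapes, and the target features are made up for illustration.

```python
# Illustrative only - not NVIDIA's actual technique. The point: the base model
# stays frozen, and the only thing we train (and later ship) is one new token
# embedding, so the personalized artefact is tiny.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED_DIM = 64

# Stand-in for a large pretrained model, frozen.
encoder = nn.Sequential(
    nn.Linear(EMBED_DIM, EMBED_DIM), nn.ReLU(),
    nn.Linear(EMBED_DIM, EMBED_DIM),
)
for p in encoder.parameters():
    p.requires_grad = False

# The only trainable parameters: one new "personal concept" token embedding.
personal_token = nn.Parameter(torch.randn(EMBED_DIM) * 0.01)

# Hypothetical target features for the personal concept (in a real system
# these would be derived from the user's handful of photos).
target_features = torch.randn(EMBED_DIM)

optimizer = torch.optim.Adam([personal_token], lr=1e-2)
for step in range(300):
    optimizer.zero_grad()
    features = encoder(personal_token)                  # frozen model, new token
    loss = nn.functional.mse_loss(features, target_features)
    loss.backward()                                     # gradients hit only the token
    optimizer.step()

print(f"final loss {loss.item():.4f}, trained parameters: {personal_token.numel()}")
```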
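
And the promised “watch it learn” sketch to go with the visual guide: a tiny network fits a sine curve while the loss drops. Purely illustrative, and nothing to do with the specific network in the video.

```python
# Illustrative only: a tiny MLP learns y = sin(x), and we print the loss as it
# falls - a text-mode version of "watching a neural network learn".
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 200 points on a sine curve.
x = torch.linspace(-3.14, 3.14, 200).unsqueeze(1)
y = torch.sin(x)

# A small two-hidden-layer network.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(2001):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    if epoch % 400 == 0:
        print(f"epoch {epoch:4d}  loss {loss.item():.5f}")
```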

Lastly, a bonus Sci-Fi book tip - I might do these regularly, too. For the first, I’ll go with Diaspora by Greg Egan. The opening chapters are the best description of the possible birth of a mind I’ve ever read. As we start creating more “minds” in silico, some of the feelings they might experience are worth bearing in mind!

Have a great weekend!