Friday Links: AI reads the Internet, multi-language data sets, and highly realistic generated video

Here are this week's links.

  • Cohere releases Aya: a model and dataset focused on supporting 101 languages. Native-language support in AI is a really important goal for equal access (and ultimately also for cultural preservation). Cohere's project crowdsourced and processed expert-level language datasets and translations. It's great to see that all the datasets are open source for others to build on.
  • Who makes money when AI reads the Internet for us? What happens if browsers start stripping content down to just text, becoming ad-blocker and AI summarizer in one? This will be valuable for many users but painful for publishers. The phenomenon has existed for a long time with ad-blockers and reading apps such as Pocket, but this future will likely become all too real and begin to break ad-based business models. Google's Chrome is incentivized not to do this, but other browsers might. Content subscriptions may have to become an increasing part of the future.
  • How is GenAI impacting the gaming sector? Nice deep dive by Alexandre Dewez of Eurazeo and the Fly Ventures team. Some important takeaways are just how hard it is to pull off a tech venture in the gaming space, and that GenAI could drive both concentration in the industry (large studios win) and an explosion of creativity from small studios. I'm less convinced that a new games engine is an option (or needed) - something also mentioned in the article - but more convinced that strides need to be taken to unlock the industry's current distribution logjams. Without a less expensive route to market, many innovative game companies won't be able to compete, and AI may raise the volume of offerings so much that small studios cannot get enough of the pie.
  • OpenAI Sora raises the bar for text-to-video. OpenAI releases demos of a new text-to-video service they are launching. The service produces videos up to a minute long at photorealistic levels of quality. The highlights reel here is definitely worth a watch. The team claims that part of the goal is to build a better understanding of the physical world and how prompted objects sit within it. It's definitely not perfect yet, and there are glitches in "physics" visible in many of the demo videos. Still, it's an impressive showing, and it has no doubt turned the heat up on companies like Runway. The speed of progress will also raise more anxiety amongst artists and filmmakers.
  • Magic.Dev raises $145M to build an AI code co-worker, not just a co-pilot. The company aims to build a developer product with a much deeper context window that could take part in long-running projects. The goal is pretty hyperbolic, and I'm not sure whether it's tongue-in-cheek or a real aim. Leaving aside whether this is "possible" or "desirable," the biggest question is really why build an AI version of a role we already have rather than something massively more suited to the task?

Wishing you a great weekend!