Friday Links: Nougat, Safety, Hurricanes and AI Weapons

AI really is reaching every corner of our lives

Happy Friday! Here are this week’s links:

  • Pretraining on the Test Set is all you need (via Ethan Mollick): a tongue-in-cheek paper that shows how fast you can "train" an AI model if you give it the test answers up front. It does make a serious point, however: for many of today's trained models, we don't actually know how much of the test set is in the training set. In some domains (with repetitive tasks), that might be fine. In others, it means systems can fool us about how performant they really are.
  • Nougat formula scanning to enhance AI training data: existing human knowledge is key for training AI. This paper describes steps towards decoding the formulas in scientific papers so they can feed that training, deepening the kind of knowledge we can train on. (Via Jim Fan on LinkedIn, who adds useful context.)
  • AI safety was top of the agenda this week, with the US President's Executive Order on AI and the UK Government's AI Safety Summit. Press coverage has been largely positive on the broad thrust of both sets of objectives but mixed on the details. More on this in an upcoming post. In the meantime: TechCrunch's take (with commentary from Sheila Gulati), and Tim O'Reilly's take, including his disappointment that today's models are mostly "out of scope" for monitoring.
  • From tropical storm to Category 5 hurricane in 5 hours: when prediction models fail. Last week's devastating hurricane in Mexico, with its loss of life and destruction, wasn't only a human tragedy; it also showed how hard it still is to predict such phenomena: there was almost no warning for residents in the storm's path. Whether the rapid intensification is due to global warming or other factors, the upset at the failed predictions highlights how dependent we already are on predictive models. From storm warnings to insurance premiums, our simulations and AI predictions drive a lot of decision-making.
  • Palmer Luckey - Inventing the Future of Defense (Invest Like the Best podcast): Palmer Luckey (previously the founder of Oculus) runs Anduril Industries, one of the leading new startup players in the US defense industry. The podcast only touches on AI tangentially, but it's 100% worth your time to listen. Listening in, it would be clear to anyone that AI will be deployed in military settings by Western and probably many other governments. Probably the scariest line in the podcast is "Quantity has a quality all of its own".

Wishing you a restful weekend.