In this episode (recorded 2/20/23) we talk with Scale AI Generative AI Hackathon competitor Nathan about large language models, the speed of developments in artificial intelligence, and whether we’re all about to be murdered. The papers we mentioned in this episode are John Searle, “Minds, Brains, and Programs” and Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” We previously interviewed Shahar Avin from the Centre for the Study of Existential Risk here.
The intro music clip is “I Have Come Out to Play” by Jonathan Richman & The Modern Lovers. You can find NonProphets on Blubrry, iTunes, Spotify, Stitcher, and Google Play. If you haven’t already, you should subscribe to Robert’s newsletter on forecasting, Telling the Future. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at nonprophetspod@gmail.com. If you enjoy this podcast, please rate us and recommend us to your friends.