We discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the “value-alignment problem” and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and “the singularity”; whether we can regulate or limit the development of AI; the prospect of an AI arms race; how AI could be used to undermine political security; OpenAI and the prospects for protective AI; tackling AI safety and control problems; why it matters what data is used to train AI; when we will have self-driving cars; the potential benefits of AI; and why scientific research should be funded by lottery.
You can learn more about Robert’s related work on the Global Catastrophic Risk Institute website. Papers and articles mentioned in this episode include Mike Rogers, “Artificial Intelligence—The Arms Race We May Not Be Able to Control”; Dario Amodei et al., “Concrete Problems in AI Safety”; and Paul Christiano et al., “Deep Reinforcement Learning from Human Preferences”. The Asilomar AI Principles are available here.
You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at nonprophetspod [at] gmail.com. If you enjoy this podcast, please rate us and recommend us to your friends.