In this episode (recorded 9/27/17), we interview Dr. Shahar Avin of the University of Cambridge’s Centre for the Study of Existential Risk (CSER).
We discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the “value-alignment problem” and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and “the singularity”; whether we can regulate or limit the development of AI; the prospect of an AI arms race; how AI could be used to undermine political security; OpenAI and the prospects for protective AI; tackling AI safety and control problems; why it matters what data is used to train AI; when we will have self-driving cars; the potential benefits of AI; and why scientific research should be funded by lottery.
Learn about the related work Robert does with the Global Catastrophic Risk Institute. Papers and articles mentioned in this episode include Mike Rogers, “Artificial Intelligence—The Arms Race We May Not Be Able to Control”, Dario Amodei et al., “Concrete Problems in AI Safety”, and Paul Christiano et al., “Deep Reinforcement Learning from Human Preferences”. The Asilomar AI Principles are available here.
You can find NonProphets on Blubrry, iTunes, Stitcher and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.
In our last podcast of 2016 (recorded on 12/22/16), we talk for a second time to Global Catastrophic Risk Institute Executive Director Seth Baum (our first interview with Dr. Baum is here). We discuss Dr. Baum’s recent column in the Bulletin of the Atomic Scientists on the risks associated with a Trump presidency. We also discuss whether Trump believes in Nixon’s “Madman Theory” of foreign policy, how much damage Trump could do to the global climate change regime, and what ordinary citizens can do to mitigate the risk of catastrophe.
We also look back at our holiday-season predictions. The Pantone 2017 color of the year was “greenery”, which was the color that Scott thought was second most likely. Robert managed to come within $5 million of the opening weekend domestic box office of Rogue One: A Star Wars Story. We finish up by talking about our holiday plans and also about vampires. Happy New Year and thanks for listening to NonProphets!
You can find NonProphets on iTunes here. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use the contact page or e-mail us at (nonprophetspod [at] gmail.com). And if you enjoy this podcast, please rate us on iTunes and recommend us to your friends.
In this episode we interview Dr. Seth Baum, the executive director and co-founder of the Global Catastrophic Risk Institute, about how to forecast the possibility of catastrophes that could threaten human civilization. We also talk briefly about the US election, discuss how to forecast a few technological questions, and consider whether Atief is likely to die on Mars. We end by previewing next week’s podcast, in which we will give our “best bets” on US election questions taken from political betting sites and submitted by listeners.
As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, use the contact page or e-mail us at (nonprophetspod [at] gmail.com). And if you enjoy this podcast, please rate us on iTunes and recommend us to your friends.