We start this episode (recorded 8/16/17) by announcing the Hybrid Forecasting Competition (HFC). Good Judgment is collaborating with IARPA to run a tournament designed to study whether human forecasters can work with machine systems to improve geopolitical forecasting. You can sign up to volunteer as a forecaster in the competition here.
We then interrupt the podcast briefly to explain that some of the podcast audio was lost due to what we believe was an attack by a rogue artificial intelligence named “Super Bert”.
Next we discuss to what extent we can believe the forecasts of AI forecasters. Robert argues that it can be hard even to know when to trust human forecasters and recommends a recent Harvard Business Review interview with former US National Coordinator for Security, Infrastructure Protection and Counter-terrorism Richard Clarke on “Cassandras”, whose warnings of disasters aren’t believed. Atief makes the case that our unfamiliarity with the behavior of AI should make it harder to trust AI when it gives surprising results.
We go on to talk about the “value-alignment problem” in AI design, Microsoft’s racist chatbot, and Isaac Asimov’s three laws of robotics. We then return to the question of how much we can trust AI judgments, but our discussion ends suddenly when Super Bert destroys the rest of our recording.
You can find NonProphets on Blubrry, iTunes, Stitcher and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.