The Fourth Age (Byron Reese Interview)

In this episode (recorded 5/2/18), we talk to Byron Reese about The Fourth Age, his fascinating new book on how artificial intelligence and robots will change the world. We talk about what motivated the book; what “the Fourth Age” is; the role of ideas and culture in history; whether automation will cause technological unemployment; whether automation will increase inequality; what skills we’ll need in the future; whether technological changes will be socially and politically disruptive; why computing and automation haven’t had a larger effect on economic measures of productivity; what access to the internet is worth; where fears about AI come from; what consciousness and free will are; and why Byron is optimistic about the future.

Our interview with Shahar Avin of the Cambridge Centre for the Study of Existential Risk (CSER) about the prospects for the development of artificial intelligence is here.

You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.

Straight Outta Chumptown

In this episode (recorded 4/4/18), we talk about the “backfire effect”: the phenomenon in which people sometimes seem to hold ideas even more firmly after being confronted with evidence that they are wrong.

We discuss Andrew Gelman’s skepticism of a New England Journal of Medicine report (which we discussed in the previous podcast) finding that firearm injuries in the US drop by 20% while NRA members are attending national meetings; how skeptical we should be of research that confirms our preconceptions; techniques from “The Debunking Handbook” for changing people’s minds; research into how much contrary information it takes to change people’s minds; whether culture or economics determines elections; how we avoid bias in forecasting and decision-making; and how we can stay out of Chumptown.

The four episodes of the You Are Not So Smart podcast on the “backfire effect” are here, here, here, and here.

You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.

Shahar Avin on Artificial Intelligence

In this episode (recorded 9/27/17), we interview Dr. Shahar Avin of the University of Cambridge’s Centre for the Study of Existential Risk (CSER).

We discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the “value-alignment problem” and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and “the singularity”; whether we can regulate or limit the development of AI; the prospect of an AI arms race; how AI could be used to undermine political security; OpenAI and the prospects for protective AI; tackling AI safety and control problems; why it matters what data is used to train AI; when we will have self-driving cars; the potential benefits of AI; and why scientific research should be funded by lottery.

Learn about the related work Robert does with the Global Catastrophic Risk Institute. Papers and articles mentioned in this episode include Mike Rogers, “Artificial Intelligence—The Arms Race We May Not Be Able to Control”, Dario Amodei et al., “Concrete Problems in AI Safety”, and Paul Christiano et al., “Deep Reinforcement Learning from Human Preferences”. The Asilomar AI Principles are available here.

You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.

Welton Chang Interview

In this episode (recorded 9/20/17), we interview superforecaster Welton Chang. Welton is a Ph.D. candidate in psychology at Penn in Philip Tetlock and Barb Mellers’ Good Judgment Laboratory. He served in Iraq and South Korea as an intelligence officer in the US Army and as an analyst for the Department of Defense.

We start by talking with Welton about how he got involved with the Good Judgment Project and the Good Judgment Laboratory. Then we talk about uncertainty in forecasting and Vizzini’s Princess Bride conundrum, the value of algorithmic forecasts (which we also talked about on another recent podcast), the limits of modern warfare, and whether Kim Jong-un and Donald Trump are rational actors.

We go on to talk about designing training materials for Good Judgment Project forecasters, IARPA’s CREATE program on improving analytic reasoning, avoiding groupthink, the importance of diversity in forecasting, and the art and practice of applying Bayes’ Theorem. We close with a shout out to Welton’s rescue cats, Percy and Portia.
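For listeners who would like a refresher on that last topic, Bayes’ Theorem is the update rule a forecaster applies when new evidence E bears on a hypothesis H; the numbers in the worked example below are purely illustrative and are not figures from the episode:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

For instance, a forecaster who starts at $P(H) = 0.30$ and judges the evidence four times as likely if H is true as if it is false ($P(E \mid H) = 0.80$ versus $P(E \mid \neg H) = 0.20$) should update to roughly $0.24 / (0.24 + 0.14) \approx 0.63$.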

You can read Welton Chang’s essay on the Iran deal, “Go Set a Watchdog on Iran” here. You can read Evan Osnos’ “The Risk of Nuclear War With North Korea” here. You can follow Welton (@WeltonChang) on Twitter here. You can follow Welton’s cats (@percyandportia) on Instagram here.

You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. Links to our other superforecaster interviews are here. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.

Can We Trust AI Judgments?

We start this episode (recorded 8/16/17) by announcing the Hybrid Forecasting Competition (HFC). Good Judgment is collaborating with IARPA to run a tournament designed to study whether human forecasters can work with machine systems to improve geopolitical forecasting. You can sign up to volunteer as a forecaster in the competition here.

We then interrupt the podcast briefly to explain that some of the podcast audio was lost due to what we believe was an attack by a rogue artificial intelligence named “Super Bert”.

Next we discuss to what extent we can believe the forecasts of AI forecasters. Robert argues that it can be hard even to know when to trust human forecasters and recommends a recent Harvard Business Review interview with former US National Coordinator for Security, Infrastructure Protection and Counter-terrorism Richard Clarke on “Cassandras”, whose warnings of disasters aren’t believed. Atief makes the case that our unfamiliarity with the behavior of AI makes it harder to trust AI when it gives surprising results.

We go on to talk about the “value-alignment problem” in AI design, Microsoft’s racist chatbot, and Isaac Asimov’s three laws of robotics. We then return to the question of how much we can trust AI judgments, but our discussion ends suddenly when Super Bert destroys the rest of our recording.

You can find NonProphets on Blubrry, iTunes, Stitcher, and Google Play. As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, you can use our contact page or e-mail us at (nonprophetspod [at] gmail.com). If you enjoy this podcast, please rate us and recommend us to your friends.

Episode 7: Seth Baum on the Risk of Global Catastrophe

In this episode we interview Dr. Seth Baum, the executive director and co-founder of the Global Catastrophic Risk Institute, about how to forecast the possibility of catastrophes that could threaten human civilization. We also talk briefly about the US election, discuss how to forecast a few technological questions, and consider whether Atief is likely to die on Mars. We end by previewing next week’s podcast, in which we will give our “best bets” on US election questions taken from political betting sites and submitted by listeners.

As always, if you have questions, ideas for future episodes, or would like to suggest a possible black swan event, use the contact page or e-mail us at (nonprophetspod [at] gmail.com). And if you enjoy this podcast, please rate us on iTunes and recommend us to your friends.