
Concerning AI | Existential Risk From Artificial Intelligence Podcast
1) 0070: We Don’t Get to Choose
Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3
2) 0069: Will bias get us first?
Ted interviews Jacob Ward, former editor of Popular Science and a journalist at many outlets. Jake’s article about the book he’s writing: Black Box. Jake’s website: JacobWard.com. Implicit bias tests at Harva...
3) 0068: Sanityland: More on Assassination Squads
Sane or insane?
4) 0067: The OpenAI Charter (and Assassination Squads)
We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!
5) 0066: The AI we have is not the AI we want
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3
6) 0065: AGI Fire Alarm
There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
7) 0064: AI Go Foom
We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
8) 0063: Ted’s Talk
Ted gave a live talk a few weeks ago.
9) 0062: There’s No Room at the Top
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
10) 0061: Collapse Will Save Us
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?