Future of Life Institute Podcast
Podcast by Future of Life Institute
230 Episodes
AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
Published: July 16, 2018
Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams
Published: June 29, 2018
AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala
Published: June 14, 2018
Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler
Published: May 31, 2018
What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville
Published: April 30, 2018
AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell
Published: April 25, 2018
Navigating AI Safety -- From Malicious Use to Accidents
Published: March 30, 2018
AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry
Published: February 28, 2018
Top AI Breakthroughs and Challenges of 2017
Published: January 31, 2018
Beneficial AI And Existential Hope In 2018
Published: December 21, 2017
Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe
Published: November 30, 2017
AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene And Iyad Rahwan
Published: October 31, 2017
80,000 Hours with Rob Wiblin and Brenton Mayer
Published: September 29, 2017
Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark
Published: August 29, 2017
The Art Of Predicting With Anthony Aguirre And Andrew Critch
Published: July 31, 2017
Banning Nuclear & Autonomous Weapons With Richard Moyes And Miriam Struyk
Published: June 30, 2017
Creative AI With Mark Riedl & Scientists Support A Nuclear Ban
Published: June 1, 2017
Climate Change With Brian Toon And Kevin Trenberth
Published: April 27, 2017
Law and Ethics of AI with Ryan Jenkins and Matt Scherer
Published: March 31, 2017
UN Nuclear Weapons Ban With Beatrice Fihn And Susi Snyder
Published: February 28, 2017
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.