230 Episodes

  1. Why Ban Lethal Autonomous Weapons

    Published: April 3, 2019
  2. AIAP: AI Alignment through Debate with Geoffrey Irving

    Published: March 7, 2019
  3. Part 2: Anthrax, Agent Orange, and Yellow Rain With Matthew Meselson and Max Tegmark

    Published: February 28, 2019
  4. Part 1: From DNA to Banning Biological Weapons With Matthew Meselson and Max Tegmark

    Published: February 28, 2019
  5. AIAP: Human Cognition and the Nature of Intelligence with Joshua Greene

    Published: February 21, 2019
  6. The Byzantine Generals' Problem, Poisoning, and Distributed Machine Learning with El Mahdi El Mhamdi

    Published: February 7, 2019
  7. AI Breakthroughs and Challenges in 2018 with David Krueger and Roman Yampolskiy

    Published: January 31, 2019
  8. Artificial Intelligence: American Attitudes and Trends with Baobao Zhang

    Published: January 25, 2019
  9. AIAP: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)

    Published: January 17, 2019
  10. Existential Hope in 2019 and Beyond

    Published: December 21, 2018
  11. AIAP: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah

    Published: December 18, 2018
  12. Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes

    Published: November 30, 2018
  13. Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead

    Published: October 31, 2018
  14. AIAP: On Becoming a Moral Realist with Peter Singer

    Published: October 18, 2018
  15. On the Future: An Interview with Martin Rees

    Published: October 11, 2018
  16. AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

    Published: September 28, 2018
  17. AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill

    Published: September 18, 2018
  18. AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

    Published: August 31, 2018
  19. The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce

    Published: August 16, 2018
  20. Six Experts Explain the Killer Robots Debate

    Published: July 31, 2018


The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
