“Ten arguments that AI is an existential risk” by Katja_Grace, Nathan Young

EA Forum Podcast (All audio) - Podcast by EA Forum Team

Crossposted from the AI Impacts Blog. This is a snapshot of a new page on the AI Impacts Wiki. We've made a list of arguments[1] that AI poses an existential risk to humanity. We'd love to hear how you feel about them in the comments, or in the poll on the LessWrong crosspost version.

Competent non-aligned agents

Humans increasingly lose games to the best AI systems. If AI systems become similarly adept at navigating the real world, will humans also lose out? (Image: Midjourney)

Summary:
- Humans will build AI systems that are 'agents', i.e. they will autonomously pursue goals
- Humans won't figure out how to make systems with goals that are compatible with human welfare and realizing human values
- Such systems will be built or selected to be highly competent, and so gain the power to achieve their goals
- Thus the future will be primarily controlled by AIs

[...]

---

Outline:
(00:25) Competent non-aligned agents
(02:01) Second species argument
(03:52) Loss of control via inferiority
(05:30) Loss of control via speed
(07:04) Human non-alignment
(08:27) Catastrophic tools
(10:00) Powerful black boxes
(11:35) Multi-agent dynamics
(12:49) Large impacts
(14:19) Expert opinion

The original text contained 4 footnotes which were omitted from this narration.

---

First published: August 14th, 2024
Source: https://forum.effectivealtruism.org/posts/WGNufK8TofHRSji9c/ten-arguments-that-ai-is-an-existential-risk

---

Narrated by TYPE III AUDIO.
