“AI Moral Alignment: The Most Important Goal of Our Generation” by Ronen Bar

EA Forum Podcast (All audio) - Podcast by the EA Forum Team

"Part one of our challenge is to solve the technical alignment problem, and that's what everybody focuses on, but part two is: to whose values do you align the system once you’re capable of doing that, and that may turn out to be an even harder problem", Sam Altman, OpenAI CEO (Link). In this post, I argue that: "To whose values do you align the system" is a critically neglected space I termed “Moral Alignment.” Only a few organizations work for non-humans in this field, with a total budget of 4-5 million USD (not accounting for academic work). The scale of this space couldn’t be any bigger - the intersection between the most revolutionary technology ever and all sentient beings. While tractability remains uncertain, there is some promising positive evidence (See “The Tractability Open Question” section). Given the first point, our movement must attract more resources, talent, and funding to address it. [...] ---Outline:(01:34) The problem(01:37) What is Moral Alignment?(02:51) The Paradox of Human-Centric Alignment(04:28) Addressing a Counterargument(06:52) The Open Tractability Question(07:50) The Risk of Not Creating a Unified Moral Alignment Field(09:44) The Solutions(09:47) A Vision for the Moral Alignment Movement(10:54) Movement Goals(13:22) Theory Of Change(13:46) The Benevolent AI Imperative(14:52) Actions(14:55) Possible Interventions(15:55) Ways to Contribute to the Movement(17:07) Give Us Feedback(17:31) Next Posts I plan to write--- First published: March 26th, 2025 Source: https://forum.effectivealtruism.org/posts/4LimpA4pyLemxN4BF/ai-moral-alignment-the-most-important-goal-of-our-generation --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.