“How do we solve the alignment problem?” by Joe_Carlsmith
EA Forum Podcast (All audio) - Podcast by EA Forum Team

This is a link post. (Audio version here (read by the author), or search for "Joe Carlsmith Audio" on your podcast app.)

We want the benefits that superintelligent AI agents could create, and some people are trying hard to build such agents. I expect efforts like this to succeed – and maybe very soon. But superintelligent AI agents might also be difficult to control. They would be, to us, as adults are to children, except much more so; in the same direction, relative to us, as advanced aliens, as demi-gods, as humans relative to ants. If such agents "go rogue" – if they start ignoring human instructions, resisting correction or shut-down, trying to escape from their operating environment, seeking unauthorized resources and other forms of power, etc. – we might not be able to stop them. Worse, because power, resources, freedom, survival, etc. are useful for many goals, superintelligent agents with a variety of [...]

The original text contained 4 footnotes which were omitted from this narration.

---

First published: February 13th, 2025

Source: https://forum.effectivealtruism.org/posts/xApddQBLzJocoBcmF/how-do-we-solve-the-alignment-problem

---

Narrated by TYPE III AUDIO.