“Carl Shulman on the moral status of current and future AI systems” by rgb
EA Forum Podcast (All audio) - Podcast by EA Forum Team
This is a link post. In which I curate and relate great takes from 80k.

As artificial intelligence advances, we'll increasingly urgently face the question of whether and how we ought to take into account the well-being and interests of AI systems themselves. In other words, we'll face the question of whether AI systems have moral status.[1] In a recent episode of the 80,000 Hours podcast, polymath researcher and world-model-builder Carl Shulman spoke at length about the moral status of AI systems, now and in the future. Carl has previously written about these issues in Sharing the World with Digital Minds and Propositions Concerning Digital Minds and Society, both co-authored with Nick Bostrom. This post highlights and comments on ten key ideas from Shulman's discussion with 80,000 Hours host Rob Wiblin. 1. The moral status of AI systems is, and will be, an important issue (and it might not have [...]

---

Outline:

(00:59) 1. The moral status of AI systems is, and will be, an important issue (and it might not have much to do with AI consciousness)

(02:47) 2. While people have doubts about the moral status of current AI systems, they will attribute moral status to AI more and more as AI advances.

(05:41) 3. Many AI systems are likely to say that they have moral status (or might be conflicted about it).

(08:08) 4. People may appeal to theories of consciousness to deny that AI systems have moral status, but these denials will become less and less compelling as AI progresses.

(10:48) 5. Even though these issues are difficult now, we won't remain persistently confused about AI moral status. AI advances will help us understand these issues better.

(14:17) 6. But we may still struggle some with the indeterminacy of our concepts and values as they are applied to different AI systems.

(15:18) 7. A strong precautionary principle against harming AIs seems like it would ban AI research as we know it.

(16:39) 8. Advocacy about AI welfare seems premature; the best interventions right now involve gaining more understanding.

(17:29) 9. Takeover by misaligned AI could be bad for AI welfare, because AI systems can dominate and mistreat other AI systems.

(19:43) 10. No one has a plan for ensuring the "bare minimum of respect" for AI systems.

The original text contained 5 footnotes which were omitted from this narration.

---

First published: July 1st, 2024

Source: https://forum.effectivealtruism.org/posts/9rvLquXSvdRjnCMvK/carl-shulman-on-the-moral-status-of-current-and-future-ai

---

Narrated by TYPE III AUDIO.