“LLMs cannot usefully be moral patients” by LGS

EA Forum Podcast (All audio) - Podcast by the EA Forum Team

For AI Welfare Debate Week, I thought I'd write up this post that's been rattling around in my head for a while. My thesis is simple: while LLMs may well be conscious (I'd have no way of knowing), there's nothing actionable we can do to further their welfare. Many people I respect seem to take the "anti-anti-LLM-welfare" position: they don't directly argue that LLMs can suffer, but they get conspicuously annoyed when other people say that LLMs clearly cannot suffer. This post is addressed to such people; I am arguing that LLMs cannot be moral patients in any useful sense, and we can confidently ignore their welfare when making decisions.

Janus's simulators

You may have seen the LessWrong post by Janus about simulators. This was posted nearly two years ago, and I have yet to see anyone disagree with it. Janus calls LLMs "simulators": unlike hypothetical "oracle AIs" or [...]

---

Outline:

(00:46) Janus's simulators
(03:00) The role-player who never breaks character
(04:37) Hypothetical future AIs

---

First published: July 2nd, 2024

Source: https://forum.effectivealtruism.org/posts/dkHxf4YHGhB562pbk/llms-cannot-usefully-be-moral-patients

---

Narrated by TYPE III AUDIO.