EA - How come there isn't that much focus in EA on research into whether / when AIs are likely to be sentient? by callum
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How come there isn't that much focus in EA on research into whether / when AIs are likely to be sentient?, published by callum on April 27, 2023 on The Effective Altruism Forum.

As far as I know, there isn't much funding or research in EA on AI sentience (though there is some, e.g. this).

I can imagine some answers:
- It's very intractable.
- Alignment is more immediately the core challenge, and widening the focus isn't useful.
- Funders have a working view that additional research is unlikely to change (e.g. that AIs will eventually be sentient).
- The longtermist focus is on AI as an x-risk, and the main framing there is on avoiding humans being wiped out.

But it also seems important and action-relevant:
- The current framing of AI safety is about aligning AI with humanity, but making AI go well for AIs could be comparably or more important.
- Naively, if we knew AIs would be sentient, it might make 'prioritising AIs' welfare in AI development' a much higher-impact focus area.
- It's an example of an area that won't necessarily attract resources or attention from commercial sources.

(I'm not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
