EA - Can we evaluate the "tool versus agent" AGI prediction? by Ben West
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we evaluate the "tool versus agent" AGI prediction?, published by Ben West on April 8, 2023, on The Effective Altruism Forum.

In 2012, Holden Karnofsky critiqued MIRI (then SI) by saying, "SI appears to neglect the potentially important distinction between 'tool' and 'agent' AI." He particularly claimed:

"Is a tool-AGI possible? I believe that it is, and furthermore that it ought to be our default picture of how AGI will work."

I understand this to be the first introduction of the "tool versus agent" ontology, and it makes a helpful, relatively concrete prediction.

Eliezer replied here, making the following summarized points (among others):

- Tool AI is nontrivial.
- Tool AI is not obviously the way AGI should or will be developed.

Gwern replied more directly, saying:

"AIs limited to pure computation (Tool AIs) supporting humans, will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and meta-learn, because all problems are reinforcement-learning problems."

Eleven years later, can we evaluate the accuracy of these predictions?

Some Bayes points go to LW commenter shminux for saying that this Holden kid seems like he's going places.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
