“LLMs as a Planning Overhang” by Larks

EA Forum Podcast (All audio) - Podcast creator: EA Forum Team

It's quite possible someone has already argued this, but I thought I should share just in case not.

Goal-Optimisers and Planner-Simulators

When people in the past discussed worries about AI development, this was often about AI agents: AIs that had goals they were attempting to achieve, objective functions they were trying to maximise. At the beginning we would make fairly low-intelligence agents, which were not very good at achieving things, and then over time we would make them more and more intelligent. At some point, around human level, they would start to take off, because humans are approximately intelligent enough to self-improve, and this would be much easier in silicon. This does not seem to be exactly how things have turned out. We have AIs that are much better than humans at many things, such that if a human had these skills we would think they were extremely capable. And [...]

---

Outline:
(00:10) Goal-Optimisers and Planner-Simulators
(01:17) What is the significance for existential risk?
(02:43) How could we react to this?

---

First published: July 14th, 2024

Source: https://forum.effectivealtruism.org/posts/CR7TjgsZgv8f5ST6h/llms-as-a-planning-overhang

---

Narrated by TYPE III AUDIO.
