“titotal on AI risk scepticism” by Vasco Grilo
EA Forum Podcast (All audio) - Podcast by the EA Forum Team
This is a linkpost for titotal's posts on AI risk scepticism, which I think are great. I list the posts below chronologically.

Chaining the evil genie: why "outer" AI safety is probably easy

Conclusion

Summing up my argument in TLDR format:

- For each AGI, there will be tasks whose difficulty lies beyond its capabilities.
- You can make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more and more constraints to a goal function.
- A lot of these constraints are quite simple but drastically effective, such as implementing time limits, bounded goals, and prohibitions on human death.
- Therefore, it is not very difficult to design a useful goal function that raises the difficulty of subjugation above the capability level of the AGI, simply by adding arbitrarily many constraints.
- Even if you disagree with some of these points, it seems hard to [...]

---

Outline:
(00:16) Chaining the evil genie: why outer AI safety is probably easy
(01:24) AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death
(03:34) How AGI could end up being many different specialized AIs stitched together
(04:38) Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)
(06:25) Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI
(08:01) The bullseye framework: My case against AI doom
(09:36) Diamondoid bacteria nanobots: deadly threat or dead-end? A nanotech investigation
(11:06) The Leeroy Jenkins principle: How faulty AI could guarantee warning shots

---

First published: May 30th, 2024

Source: https://forum.effectivealtruism.org/posts/yfmKnyd3uThq9Dd2c/titotal-on-ai-risk-scepticism

---

Narrated by TYPE III AUDIO.