“Safety-concerned EAs should prioritize AI governance over alignment” by sammyboiz
EA Forum Podcast (All audio) - Podcast by EA Forum Team
Setting aside the fact that EAs tend to be more tech-savvy and that their comparative advantage lies in technical work such as alignment, the community as a whole is not prioritizing advocacy and governance enough. Effective Altruists over-prioritize working on AI alignment relative to AI regulation advocacy. I disagree with prioritizing alignment because much of alignment research is simultaneously capabilities research (Connor Leahy has even begged people to stop publishing interpretability research). Consequently, alignment research accelerates the timelines toward AGI. Another problem with alignment research is that cutting-edge models are only available at frontier AI labs, so there is comparatively little that someone on the outside can help with. Finally, even if an independent alignment researcher finds a safeguard against a particular AGI risk, the AI lab it is intended for might not implement it, since doing so would cost time and effort. This is due to the "race to the bottom," a governance problem. Even [...]

---

First published: June 11th, 2024

Source: https://forum.effectivealtruism.org/posts/Jz9ypXAR2TnWDLpGE/safety-concerned-eas-should-prioritize-ai-governance-over

---

Narrated by TYPE III AUDIO.