“Safety Conscious Researchers should leave Anthropic” by GideonF

EA Forum Podcast (All audio) - Podcast produced by EA Forum Team

It's time for safety-conscious researchers to leave Anthropic.

This is a short post, due to an unfortunate lack of time (in more ways than one). I think it's pretty clear that AGI might come about rather (or indeed, very) soon. I don't need to rehash on this forum why this might be among the most transformative events in history, and, importantly, the catastrophic implications if it goes wrong. While this community has often focused heavily on avoiding either misuse by 'bad actors' (often perceived to be terrorist groups) or misalignment risk, the risks of conflict or enormous concentration of power, as well as systemic risks, cannot be ignored. Also, as the recent debate week showed, even supposedly 'good' actors might lock in their values, and if these are sufficiently different from the 'true' values, e.g. if they fail to adequately value digital minds, this may be [...]

---

First published: April 1st, 2025

Source: https://forum.effectivealtruism.org/posts/srC24HLnzc8pAJsJx/safety-conscious-researchers-should-leave-anthropic

---

Narrated by TYPE III AUDIO.