EA - 13 background claims about EA by Akash
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 13 background claims about EA, published by Akash on September 7, 2022 on The Effective Altruism Forum.

I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly involved EAs. In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living in an EA hub.

Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.") Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community.

General

1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% chance of extinction in the next 10-30 years.)

2. Many EA leaders are primarily concerned about AI safety (and, to a lesser extent, other threats to humanity’s long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years, and that much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values.

3. Many of the most important discussions, research, and debates are happening in person in major EA hubs. (I claim that visiting an EA hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.)

4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim that people who are trying to use their careers in a valuable way should evaluate organizations and opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.)

AI Safety

5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster; e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly.)

6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety.

7. There are several programs designed to help people get started in AI safety. Examples include SERI-Mats (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBS (for social scientists), and the newly announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged program or reading list.)

8. There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you’ve learned, and visiting an EA hub.)

9. People can apply for grants to skill up in AI safety. You do not have to propose an extremely specific project...
