EA - AI Governance Needs Technical Work by Mauricio
The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Governance Needs Technical Work, published by Mauricio on September 5, 2022 on The Effective Altruism Forum.

Summary and introduction

People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work: what they are, why they might be valuable, and what you can do if you're interested. I discuss:
- Engineering technical levers to make AI coordination/regulation enforceable (through hardware engineering, software/ML engineering, and heat/electromagnetism-related engineering)
- Information security
- Forecasting AI development
- Technical standards development
- Grantmaking or management to get others to do the above well
- Advising on the above
- Other work

Acknowledgements

Thanks to Lennart Heim, Jamie Bernardi, Luke Muehlhauser, Gabriel Mukobi, Girish Sastry, and an employee at Schmidt Futures for their feedback on this post. Mistakes are my own. This post is mostly informed by various conversations with AI governance researchers, as well as earlier writings on specific kinds of technical work in AI governance.

Context

What I mean by "technical work in AI governance"

I'm talking about work that:
- Is technical (e.g. hardware/ML engineering) or draws heavily on technical expertise; and
- Contributes to AI's trajectory mainly by improving the chances that AI governance interventions succeed[1] (as opposed to by making progress on technical safety problems or building up the communities concerned with these problems).

Neglectedness

As of writing, there are (by one involved expert's estimate) ~8-15 full-time equivalents doing this work with a focus on especially large-scale AI risks.[2]

Personal fit

Technical skills are, of course, useful for this work (including but not necessarily in ML), and an interest in the intersection of technical work and governance interventions presumably makes it more engaging. Also, whatever it takes to make progress on mostly uncharted problems in a tiny sub-field[3] is probably pretty important for this work now, since that's the current nature of these fields. That might change in a few years. (But that doesn't necessarily mean you should wait; time's ticking, someone has to do this early-stage thinking, and maybe it could be you.)

What I'm not saying

I'm of course not saying this is the only or main type of work that's needed. (Still, it does seem particularly promising for technically skilled people, especially under the debatable assumption that governance interventions tend to be more high-leverage than direct work on technical safety problems.)

Types of technical work in AI governance

Engineering technical levers to make AI coordination/regulation enforceable

To help ensure AI goes well, we may need good coordination and/or regulation.[4] To bring about good coordination/regulation on AI, we need politically acceptable methods of enforcing them (i.e. catching and penalizing/stopping violators).[5] And to design politically acceptable methods of enforcement, we need various kinds of engineers, as discussed in the next several sections.[6]

Hardware engineering for enabling AI coordination/regulation

To help enforce AI coordination/regulation, it might be possible to create certain on-chip devices for AI-specialized chips or other devices at data centers. As a non-exhaustive list of speculative examples: Devices on network switches that identify especially large training runs could be helpful. They could help enforce regulations that apply only to trai...
