EA - "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions by oeg

The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Risk Awareness Moments" (Rams): A concept for thinking about AI governance interventions, published by oeg on April 14, 2023 on The Effective Altruism Forum.

In this post, I introduce the concept of Risk Awareness Moments ("Rams"): "A point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable." This is a blog post, not a research report, meaning it was produced quickly and is not held to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.

Summary

- I give several examples of what a Ram might look like for national elites and/or the general population of a major country. Causes could include failures of AI systems, or more social phenomena, such as new books being published about AI risk.
- I compare the Ram concept to similar concepts such as warning shots. I see two main benefits: (1) Rams let us remain agnostic about what types of evidence make people concerned, e.g., something that AI does vs. social phenomena; (2) the concept lets us remain agnostic about the "trajectory" by which people become concerned about the risk, e.g., whether there is a more discrete, continuous, or lumpy change in opinion.
- For many audiences, and many potential ways in which AI progress could play out, there will not necessarily be a Ram. For example, there might be a fast takeoff before the general public has a chance to significantly alter their beliefs about AI.
- We could do things to increase the likelihood of Rams, or to accelerate their occurrence. That said, there are complex considerations about whether actions to cause (earlier) Rams would be net positive.
- A Ram - even among influential audiences - is not sufficient for adequate risk-reduction measures to be put in place. For example, there could be bargaining failures between countries that make it impossible to reach mutually beneficial AI safety agreements. Or people who are more aware of the risks from transformative AI might also be more aware of the benefits, and thus make an informed decision that the benefits are worth the risks by their lights.
- At the end, I give some historical examples of Rams for issues other than AI risk.

[Image: From DALL-E 2]

Definition

I define a Risk Awareness Moment (Ram) as "a point in time, after which concern about extreme risks from AI is so high among the relevant audiences that extreme measures to reduce these risks become possible, though not inevitable."

- "Extreme risks" refers to risks at least at the level of global catastrophic risks (GCRs). I intend the term to capture accident, misuse, and structural risks.
  - Note that people could be concerned about some extreme risks from AI without being concerned about others. For example, the general public might become worried about risks from non-robust narrow AI in nuclear weapons systems without being worried about misaligned AGI. Concern about one risk would not necessarily make it possible to get measures that would be helpful for tackling other risks.
  - Additionally, some audiences might have unreasonable threat models. One possible example would be an incorrect belief that accidents with Lethal Autonomous Weapons would themselves cause GCR-level damage. As with the bullet point above, this belief might be necessary for measures that tackle the specific (potentially overblown) threat model, without necessarily being helpful for measures that tackle other risks.
- "Relevant audiences" will differ according to the measure in question. For example, measures carried out by labs might require people in labs to be widely convinced. In contrast, government-led measures might require people in specific parts of the government - and maybe also the public - to be convinced.
- "Extreme measures" could include national-le...
