EA - Holden Karnofsky’s recent comments on FTX by Lizka

The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Holden Karnofsky's recent comments on FTX, published by Lizka on March 24, 2023 on The Effective Altruism Forum.

Holden Karnofsky has recently shared some reflections on EA and FTX, but they're spread out and I'd guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.

These comments come after a few months — there's some explanation of why that is in this post and in this comment.

Updates after FTX

I found the following comment (a summary of updates he's made after FTX) especially interesting (please note that I'm not sure I agree with everything):

Here's a followup with some reflections. Note that I discuss some takeaways and potential lessons learned in this interview.

Here are some (somewhat redundant with the interview) things I feel like I've updated on in light of the FTX collapse and aftermath:

- The most obvious thing that's changed is a tighter funding situation, which I addressed here.
- I'm generally more concerned about the dynamics I wrote about in "EA is about maximization, and maximization is perilous." If I wrote that piece today, most of it would be the same, but the "Avoiding the pitfalls" section would be quite different (less reassuring/reassured). I'm not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
  - It looks to me like the community needs to beef up and improve investments in activities like "identifying and warning about bad actors in the community," and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
  - I've long wanted to try to write up a detailed intellectual case against what one might call "hard-core utilitarianism." I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn't (and I don't want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better explained and easier to find, and I'm thinking about how to make that happen (though I can't promise to do so soon).
- I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I'd reasoned more like this: "This person is becoming very associated with effective altruism, so whether or not that's due to anything I've done, it's important to figure out whether that's a bad thing and whether proactive distancing is needed."
- I'm not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I've increased my picture of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won't be able to predict perfectly on this front).
- Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative conse...
