Discussion about this post

Matt Ball:

Given that 99+% of people’s “work” on AI accomplishes zero, I would say “No,” given how much intense suffering there is that we can impact.

Matt Reardon:

It's unclear what the threshold for mattering is here. It seems trivial to argue that such work is “not worthless,” but RP readers are probably more concerned about whether their work is optimal on the margin, where these arguments are less compelling if you put substantial credence in TAI coming soon.

On the latter assumption, much of the case here rests on an unstated premise that second-order effects of actions will dominate in the age of AI. I think making this argument explicit would expose its weakness. For example, in the factory farming case, it’s unclear (indeed, I think it’s implausible) that direct preference for animal-derived meat is what’s making those products such a large component of our food system. Rather, it’s the more abstract values of taste, price, and convenience favoring these products. If AI is not a monolith, it will uncover a wide range of potential products along these margins, from which the strongest will dominate. The possible exception here is if you achieve a moral revolution and people actively dis-prefer suffering-derived products. Again, if you flesh the argument out, it seems weak: there are few post-TAI worlds where animal-derived products dominate on these dimensions, and moral revolution scores quite poorly on tractability.

It seems better to focus on the first order effect of making AI corrigible or directly working on making AI benevolent, rather than ~hoping it picks up on your independent efforts.
