Is it pointless to work on anything but AI?
Why your non-AI work may still matter in an AI-dominated future
Author: Hayley Clatterbuck
Investments in AI are accelerating at an incredible pace (see graphic below), with many people predicting that transformative AI (e.g., AI that can perform most tasks that humans can do or superintelligent AI that surpasses human abilities) will arrive within the next decade, and maybe sooner. This is happening amidst, and despite, enormous uncertainty about whether this would be a good or a bad thing for the world. Speculations about AI’s impacts abound: AI might usher in prosperity and abundance, or it might cause unprecedented unemployment. It might solve our most pressing problems, like climate change and global poverty, by rapidly increasing the rate of economic growth and technological progress. Or it might enact our worst sci-fi nightmares and kill us all. The one thing that many people seem to increasingly agree upon is this: Transformative AI is coming soon, it’s going to change everything, and the fate of the future depends on whether we get it right.
If that’s right, then for those of us who are concerned with the future going as well as it can, making sure that a future with AI is a good one (at the very least, that there is a future with AI) should be a top priority.
Note: Quarterly investments by top AI-developing firms.
Source: Data and image from the Wall Street Journal, accessed through Derek Thompson’s Substack
In the past few years, we’ve heard an even more extreme claim: In the face of an AI revolution, if you’re working to make the future better (e.g., in policy or high-impact nonprofits), working on anything but AI safety is pointless. And if you want to dedicate your time to other projects, you’d better make sure they pay off in the very short term, before AI arrives. Now, suppose you’ve devoted yourself to helping the global poor, mitigating climate impacts, or improving farmed animal welfare. Does impending AI mean you should stop what you’re doing?
Not necessarily.
Why might AI make your projects pointless?
Here are three versions of the argument for why present projects are rendered pointless by an approaching AI revolution:
The optimistic view: AI will solve it! That work you’re doing now? Superintelligent AI will be able to do it better, cheaper, and faster than you can. In fact, it’s going to solve the problems that you’re so worried about. So stop wasting your time and money!
The pessimistic view: There’s a large chance that AI will kill us all! That work you’re doing now? It’s not going to matter because we might all be dead (or worse). So either work on reducing AI risk or enjoy the time you have left!
The uncertain view: AI is going to disrupt everything. We have no idea what’s going to happen, but it’s a good bet that tomorrow’s problems and solutions are going to be very different from today’s. That work you’re doing now? We have no idea whether it will make any sense after AI. So just wait, save up your resources, and come up with new plans in the new AI landscape.
None of these scenarios is certain to come true. The AI transition may be slower, less transformative, and more predictable than these arguments assume. But even if AI does cause radical and rapid change, some of the non-AI work we do today will still be important in shaping a post-AI future. Indeed, it might be even more important if radical AI is coming.
Why AI won’t solve everything
Superhuman AI will allow us to throw a lot more intelligence at the problems we currently face. However, while intelligence goes a long way, it’s not always enough. And when intelligence alone isn’t enough, we should expect that human work will still be crucial.
One important goal that intelligence alone does not secure is ensuring that AI has the right values. What this means in practice is sometimes controversial, given widespread moral disagreement, as we have seen in debates over LLM design. At a minimum, however, we want to make sure that we can instill systems of value that won’t lead to human destruction.
Another goal is to bring about the social and legal changes necessary to implement innovations accelerated by AI and to respond to the risks it might create. Current projects that advocate for animals and the poor, push for regulatory reform and better state capacity, or reduce the risks of other catastrophes can make essential contributions to these goals.
We can think about these issues by examining a test case: Will AI solve factory farming? First, we need to make sure that AI will want to end factory farming. If an AI learns to act like humans do now, then it’s unlikely to favor an end to factory farming. It’s also important to note that AI is not a monolith, and not (yet) an independent actor. Factory farming’s proponents will also be using AI to enhance their effectiveness. AI might deliver us helpful innovations, such as lab-grown meat that is much cheaper than the real thing. However, so far, people have been hesitant to switch to alternative meat products, and lab-grown meat has already been made illegal in places like Florida and Alabama. Without cultural and legal changes, innovation alone won’t solve factory farming. Therefore, advocacy work to change hearts, minds, and laws—standard kinds of projects in the animal welfare field today—will still be essential.
It may sometimes be hard to predict which problems AI will be able to solve, especially since we don’t know what future AI systems will be like. As a first attempt, a useful heuristic might be to ask: How many dimensions do we need to get right to be successful, and will AI be effective on those dimensions? Currently, we are most confident in AI’s ability to overcome innovation challenges. AI will likely yield efficiencies that will solve problems on economic dimensions as well. AI’s prospects for spurring cultural and political change in desirable directions are much more uncertain. AI already boasts a Nobel Prize in chemistry. Its prospects for a Nobel Peace Prize are much murkier.
Note: Example of a dimensional analysis of sample causes, mapped against expected AI capabilities on those dimensions.
The lesson is that if we want AI to make our world better, we need to do a lot of work to steer it in the right direction. Some of this work has little to do with AI directly, like research in ethics, political advocacy, and building processes to implement innovations toward good ends. The potential importance of these projects shows why it may be a mistake to focus only on projects whose impacts come in the very near term, before transformative AI arrives. We should be looking for projects designed to build “levers that reach across the paradigm shift” (as these authors aptly put it). For example, if today’s animal advocacy makes AI more animal-friendly, and an AI future without that advocacy would contain far more factory-farmed animals than the world does today, then the pending AI transition makes animal advocacy more important, not less.
Why the threat of AI catastrophe shouldn’t paralyze us
The pessimistic argument goes: if there’s a high enough chance that AI causes an existential catastrophe, then nothing we do today matters in the long term. So perhaps we should drop everything and work full-time on making sure AI doesn’t kill us.
One reason for skepticism about the optimistic argument also applies here. Some of the projects that don’t seem to have much to do with AI might be critically important in securing AI safety. Strengthening democracy may prevent authoritarian capture of AI. Investing in biosecurity infrastructure would support surveillance of new AI threats. Promoting international cooperation would help us mount a coordinated global response in the event of an AI crisis. Philosophical work may develop ethical frameworks that would guide AI in directions less likely to result in harm to humans. If you truly fear an AI-related catastrophe, one wise strategy is to strengthen society on all other fronts, so that if we face AI upheaval, we do so as a healthier, wealthier, more unified world. There are also many doom scenarios where AI doesn’t lead to human extinction but rather traps us in miserable futures (e.g., worlds with entrenched totalitarianism or vastly expanded factory farming). The work to avoid these futures becomes more important with impending AI, not less.
Few pessimists think that AI doom is certain. Suppose you think we have a 10% chance of extinction from AI this century. In the 90% of worlds where we avoid AI x-risk, work in animal welfare and global health will still have been important (and even if AI doom arrives, work we did in the meantime to prevent suffering still matters). We have been living under the threat of existential catastrophe due to nuclear war or other disasters for a long time. This reality has motivated many people to work harder on avoiding truly catastrophic events. But thankfully, the world didn’t let these threats stop us from moving forward on other fronts, too. Even if the challenges from AI are greater than we’ve ever seen, the chance of annihilation shouldn’t paralyze us.
Why uncertainty shouldn’t paralyze us either
Consider the “uncertain view” argument: Imagine a near-future world populated by billions of superintelligent AIs that can do most things that humans can do, but better, faster, and cheaper. What is this world going to be like? Are the normal societal structures still in place? Are the economic incentives the same? How have humans responded culturally and politically? At any point, the space of important problems and effective solutions is shaped by the environment (technological, social, economic, etc.). If this environment changes radically, how can we be sure that today’s most promising solutions to the world’s problems will still make sense tomorrow?
This is probably the most persuasive of the three arguments. Instead of acting now to improve the world, we always have the option of saving our resources and waiting to see what happens during an AI revolution. Still, we don’t think that everyone (outside of AI) should stop what they’re doing.
For one, timelines matter. If the AI revolution happens next month and the dust settles within the year, then waiting has little cost. But if the AI revolution is a long and protracted process, we risk passing up important opportunities to do good in the meantime. Pausing work in other areas can also cause those fields to bleed talent, expertise, and money, which would prevent us from simply picking things up where we left off after the AI transition.
And while many things will change with AI, some things probably won’t. Some nations’ economies may change rapidly, but developing countries are less likely to be quickly or radically reshaped by AI, leaving many aspects of global development work unchanged. Some research areas, like human psychology and moral philosophy, will take on extra importance during the turbulent AI transition. For example, we might need to predict how humans will react to changes or how social structures will evolve, or we might want to design interventions to alleviate ill effects or incentivize beneficial behaviors. Even for domains that will change, more knowledge today will equip us to more quickly understand the changes that AI brings about.
We should take AI seriously
We have argued that AI doesn’t necessarily make all other projects pointless. Here’s what we haven’t argued: we don’t claim that everybody should just keep doing what they’re doing. Some projects really will be rendered moot by AI (e.g., some research tasks that will be performed by near-term AIs). When picking new projects, it’s worth asking: Is this something AI could do better in the near future? Does this project’s path to impact only make sense in a non-AI world? Other projects may not be undermined by AI, but their impact might be dwarfed by the greater chance to do good by working squarely on AI. Still others will work on AI via more indirect routes.
What we’re calling for is more careful cross-cause prioritization by evaluating individual projects on their own merits, rather than just on the basis of which bucket they fall into. A diversified portfolio of causes is, among other things, a hedge against our uncertainty about AI timelines, trajectories, and tractability. This diversity also preserves valuable domain expertise and institutional knowledge, which will be crucial for navigating a post-AI transition world.
For a deeper dive into this topic, you can find an expanded version of this post here.
Acknowledgments
Thank you to Noah Birnbaum, Marcus Davis, Oscar Delaney, Laura Duffy, Bob Fischer, Arvo Muñoz Morán, David Moss, and Derek Shiller for helpful feedback on this post.
Comments
Given that 99+% of people’s “work” on AI accomplishes zero, I would say “No,” considering how much intense suffering there is that we can impact.
It’s unclear what the threshold for mattering is here. It seems trivial to argue that such work is “not worthless,” but RP readers are probably more concerned with whether their work is optimal on the margin, and on that question these arguments are less compelling if you put substantial credence in TAI coming soon.
On the latter assumption, much of the case here rests on an unstated premise that second-order effects of actions will dominate in the age of AI. I think making this argument explicit would expose its weakness. For example, in the factory farming case, it’s unclear (indeed, I think it’s implausible) that a direct preference for animal-derived meat is what makes those products such a large component of our food system. Rather, it’s the more abstract values of taste, price, and convenience that favor these products. If AI is not a monolith, it will uncover a wide range of potential products along these margins, from which the strongest will dominate. The possible exception is if you achieve a moral revolution and people actively dis-prefer suffering-derived products. Again, if you flesh the argument out, it seems weak: there are few post-TAI worlds where animal-derived products dominate on these dimensions, and a moral revolution scores quite poorly on tractability.
It seems better to focus on the first-order effect of making AI corrigible, or to work directly on making AI benevolent, rather than hoping it picks up on your independent efforts.