Jack Santa Barbara asks if we’ve created a ‘doomsday machine’ by bringing artificial intelligence to a growth economy
Comment: If you have the sense the world is fraying at the edges and things are getting worse faster than they are getting better, you wouldn’t be alone. Various scholars and organisations have coined terms to describe the multiple, unique and serious crises we are facing.
No less formidable an organisation than the World Economic Forum talks about the polycrisis, defined as the confluence of several crises across different domains that reinforce each other. The list of individual crises is long and includes social, political, economic and ecological dynamics that are seriously challenging humanity’s wellbeing. The list includes everything from terrorist attacks to supply chain disruptions, to climate change and biodiversity loss, as well as cost-of-living challenges.
Other scholars talk about different versions of what they call the metacrisis. These discussions go beyond the list in the polycrisis and include ways we think about what is important to us, our ideologies, values, and ways of thinking.
Yet other scholars talk about the permacrisis (combining “permanent” and “crisis”):
“‘Permacrisis’ is a term that perfectly embodies the dizzying sense of lurching from one unprecedented event to another, as we wonder bleakly what new horrors might be around the corner.”
While including many of the same specific crises included in the other lists, these scholars point out that the complexity and interconnectedness of the many crises we face means none of them can be “solved” on their own, but only managed in some way to avoid the worst possible outcomes.
They also point out the potential benefits of crises as drivers of change, providing some hopeful perspectives to our unfortunate predicament.
There is broad agreement among these organisations and scholars that we currently face multiple threats, many unprecedented, across a broad range of issues. There is less agreement on which threats are the most serious. Climate change gets central billing in many approaches, whereas others focus on a spiritual crisis, or broader biosphere disruption, for example.
Not surprisingly, these various framings of the multiple problems humanity faces offer different prescriptions for resolving or managing the risks. The World Economic Forum, for example, talks about modifying our economic model to develop in a circular fashion, where waste streams become resource streams for other economic activities. They suggest that a circular bioeconomy is a desirable goal, emphasising an increased focus on using biomass to provide products that can be easily recycled.
Other approaches put a greater emphasis on the need for new values and goals to direct our economic activities, and how we get along with each other. The wellbeing approach is one example.
One of the themes running through some, but not all, of these scholarly discussions is the notion that our economic model based on continuous generation of surplus is a significant driver of many of these crises. It is easy to see the connection between ecological crises, whether it’s climate change, biodiversity loss or pollution, and the “extract, use and dispose linear economy” which currently dominates. It’s also relatively easy to trace the impact of the unequal distribution of benefits from the current economic system to a variety of social problems, ranging from extreme poverty and distrust in governments to resource wars.
Some scholars point out that it is not economic growth per se that is the problem, but the level of energy and raw materials that the economy uses, which exceeds biophysical limits. It is this excess energy and material throughput that disrupts planetary boundaries beyond their safe thresholds. These scholars call for massive reductions in material throughput to bring human activities back within biophysical limits. This is the de-growth approach.
Even those who do not accept material throughput in the economy as a major driver of the multiple risks we face, do acknowledge that reduced material throughput is desirable. Solar panels and wind turbines are offered as examples of ways to generate energy without the gross climate disruption that greenhouse gases cause. They also emphasise redesigning products to make more efficient use of whatever materials go into making the products.
The point of this brief introduction to the many threats that seem to fester is that life has become increasingly precarious for many, and not only for the poorest of the poor, who always suffer most. Even the moderately affluent across the globe now suffer floods, droughts, fires and pandemics, as well as many of the other existential threats in the various crisis categories.
Furthermore, there is little consensus regarding the diagnoses and prescriptions for a healthy world. We don’t have agreement on either what the priority drivers of the crises are or what priority actions we should be taking. There is no united “we”, but rather a divided species with very different priorities and recommendations.
Enter Artificial Intelligence, AI. AI is a machine-based means of using enormous computing power to achieve a goal. The holy grail of AI is to be able to achieve any conceivable goal by the development of intelligence so vast that it would be beyond human comprehension. This holy grail is known as Artificial General Intelligence, or AGI, capable of solving any problem of any nature.
Fortunately, AGI does not yet exist. What we have now is narrow AI, capable of achieving a specific goal, such as winning at chess, translating languages, writing essays on specific topics, or creating images. One of the remarkable things about even narrow AI is that it can learn things on its own.
The early chess-playing AIs had access to the records of games played by master chess players, so it was easy for them to beat any newcomer. But a more advanced chess-playing AI literally taught itself how to play. Given only the rules of the game and the goal of winning, it played millions of games against itself in a matter of hours. It not only learned how to win, but also developed never-before-seen strategies for winning. A narrow go-playing AI similarly taught itself to win at go, a game even more complex than chess.
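The self-play idea described above can be illustrated with a toy sketch. This is only an illustration, not how systems like the chess and go programs are actually built (those use deep neural networks and tree search); here a deliberately simple game, one-pile Nim, and a plain lookup table stand in for them. The point survives the simplification: the program is given only the rules and the goal, and everything it knows about strategy emerges from playing against itself.

```python
import random

def train_self_play(pile=10, episodes=30000, eps=0.1, alpha=0.5):
    """Learn one-pile Nim (take 1-3 stones per turn; whoever takes the
    last stone wins) purely by self-play. Q[(stones, move)] estimates
    the chance of winning after taking `move` stones from `stones`."""
    Q = {}
    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if random.random() < eps:
                move = random.choice(moves)  # occasionally explore
            else:                            # otherwise play the best known move
                move = max(moves, key=lambda m: Q.get((stones, m), 0.5))
            history.append((stones, move))
            stones -= move
        # The player who took the last stone won. Walking back through
        # the game, wins and losses alternate between the two "selves".
        reward = 1.0
        for state_move in reversed(history):
            old = Q.get(state_move, 0.5)
            Q[state_move] = old + alpha * (reward - old)
            reward = 1.0 - reward
    return Q

def best_move(Q, stones):
    """The learned policy: the move with the highest estimated win chance."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.5))
```

After enough self-play games, the learned policy tends to reproduce the known winning strategy for this game (leave your opponent a multiple of four stones), a strategy the program was never told.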
There is no doubt that narrow AI is an incredibly powerful entity that can contribute to many human endeavours. Medical diagnoses are but one obvious example. Designing individualised learning routines would be another. There are many more.
But there is a significant risk associated with AI that has received little attention to date. To understand this risk we need to appreciate that the two major developers of significant AI are private corporations focusing on profits, and governments focusing on military applications. Each application presents its own unique risks.
For-profit corporations by law are required to maximise profits for their shareholders. AI will provide corporations with powerful and unique opportunities to expand profits, with revenues from AI projected to be over $1 trillion by 2030. This will almost certainly involve selling more products or services, as well as increased use of energy and material resources to produce those products and services. Whether we actually need all the new products AI is likely to develop is rarely discussed. We leave it to the market, and a global trillion-dollar-a-year marketing and advertising budget, to decide.
To date, corporations have increased profits in a variety of ways, many of which did not involve innovations or product enhancements. They involved obstructing unions, externalising costs, destroying ecosystems, and displacing poor communities. Externalising costs often involves distorting the democratic process by unduly influencing policy makers to grant special privileges to the corporate interests.
The risk of corporate use of narrow AI is thus more ecological and social damage, as well as further corruption of the democratic process. Whether appropriate social controls to avoid these risks can be implemented quickly enough is highly uncertain. Expansion of AI is occurring rapidly. The enabling power of AI at the corporate level clearly needs more serious attention.
Some of the risks associated with military applications of AI are already in the public discourse. It is the ecological and social implications associated with the corporate application that have received less attention. AI is a powerful new entity that will accelerate the harm our profit-oriented growth economy is already doing to the biosphere. And this will occur at a time when there is increasing urgency to dramatically reduce the disruption we are causing as we ever more rapidly approach irreversible tipping points.
Accelerating biosphere disruption is only one of many risks that AI is likely to cause. But it is arguably one of the most important to address. Without restoring and maintaining biosphere integrity, many of the other risks become irrelevant. We won’t have an economy, or a civilisation, without biosphere integrity.
How to best avoid this risk? Perhaps some human wisdom would allow us to step back and look at what is driving our current multiple crises, and prioritise values that preserve biosphere integrity and human wellbeing within the web of life. Our narrow focus on profits and political dominance is blinding us to the enormity of the risks we face. AI, and especially AGI if it ever materialises, is a significant risk multiplier not unlike nuclear weapons (but more difficult to regulate).
How to manage this challenge is not clear. If you’ve never engaged in social or political activism, perhaps this is the time to join with others in a worthy cause. It’s going to require all our skills and wisdom to deal with these crises, no matter what we call them.