Source: Daniel Schmachtenberger & Liv Boeree on AI, Capitalism, Misalignment & Moloch
Link: https://www.youtube.com/watch?v=KCSsKV5F4xc
Date: March 2023
Executive Summary
This document analyzes a conversation exploring the concept of “Moloch,” a metaphor for negative-sum games driven by misaligned incentives, and its relationship to the “meta-crisis” of increasing global catastrophic risks. It argues that our current global systems, particularly capitalism, act as a kind of misaligned superintelligence, accelerating these risks. Furthermore, the conversation delves into how AI development, far from being a solution, risks exacerbating these problems if not carefully aligned with positive-sum goals. The briefing emphasizes the urgency for a shift in focus towards risk mitigation, international cooperation, and a re-evaluation of the values driving technological development.
1. The Moloch Dynamic: A God of Negative-Sum Games
Definition
Moloch is defined as “the god of negative-sum games,” representing “unhealthy competitive situations” arising from “a system of bad incentives that incentivize agents within that system to act in a way that is bad for the whole.” (4:22, 4:28)
Examples
- Beauty Filters: Social media filters create an unhealthy standard, leading people to hate their natural faces. (5:31-6:43)
- Climate Change: Countries prioritize GDP growth by externalizing pollution costs, producing a “tragedy of the commons.” (7:20-7:41)
- Arms Races: Nations develop weapons, even if they don’t want to, out of fear of being left vulnerable. (7:48-8:56)
- Environmental Issues: Overfishing, deforestation, and plastic waste; no one wants the destruction, but everyone contributes because abstaining would put them at a disadvantage. (9:02-12:17)
Key Characteristics
- Lack of Trust: “the inability for trust and coordination” (10:26) drives the “race to the bottom.”
- Near-Term Incentives: Actors are driven by short-term gains that degrade the long-term collective good. (9:18)
- Externalization of Costs: Costs are pushed onto others while benefits are internalized. (7:20-7:41, 11:53-12:11)
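The incentive structure above can be sketched as a small payoff model. This is an illustrative toy, not anything from the conversation; all payoff numbers are assumptions chosen to make the structure visible.

```python
# Toy payoff model of a negative-sum "race to the bottom".
# All numbers are illustrative assumptions, not figures from the source.

N_ACTORS = 5

def payoff(my_move: str, others_defecting: int) -> float:
    """Payoff for one actor, given how many of the other actors defect.

    Defecting yields a private gain of 2; every defector imposes a
    shared cost of 1 on each actor (the externalized damage).
    """
    private_gain = 2.0 if my_move == "defect" else 0.0
    total_defectors = others_defecting + (1 if my_move == "defect" else 0)
    return private_gain - 1.0 * total_defectors

# Whatever the others do, defecting beats cooperating for the individual...
for others in range(N_ACTORS):
    assert payoff("defect", others) > payoff("cooperate", others)

# ...yet everyone defecting is collectively worse than everyone cooperating.
all_cooperate = N_ACTORS * payoff("cooperate", others_defecting=0)
all_defect = N_ACTORS * payoff("defect", others_defecting=N_ACTORS - 1)
print(all_cooperate, all_defect)
```

Defection dominates for each individual even though the all-defect outcome is strictly worse for everyone, which is the structural signature of the Moloch dynamic described above.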
2. The Meta-Crisis: A Convergence of Global Catastrophic Risks
Definition
The meta-crisis is “a unique time in history where there are an increasing number of global catastrophic risks with increasing probabilities.” (13:21, 13:34)
Novelty
While civilizations have faced collapses, “we are for the first time facing that in a global way” due to globalization, technological capacity, and “planetary boundaries.” (14:02-15:13)
Technological Roots
- Nuclear Weapons: The bomb was the “first fully existential tech,” enabling rapid and complete destruction. (15:25-15:38, 18:26-18:37)
- Industrial Tech: The combination of industrial technology, a linear materials economy, and exponential economic growth is the main driver of planetary boundaries being crossed. (15:50-17:09)
- Information Tech: The global information network enables the rapid spread of ideas and memes. (17:16-17:39)
Key Factors
- Increased Fragility: Global dependence on interconnected systems heightens vulnerability. (18:04-18:20)
- Proliferation of Catastrophic Tech: Multiple actors with dangerous technologies make control harder. (23:14-24:52)
- Interconnected Risks: Various risks, like climate change, resource wars, and AI, tend to cascade and exacerbate each other. (26:34-26:57)
- Post-WWII Paradox: While averting nuclear war, the post-WWII model inadvertently increased global fragility and pressure on planetary boundaries. (22:55-23:14)
3. Capitalism as a Misaligned Superintelligence
- Current System as Agent: The global capitalist system is presented as a “general autopoietic superintelligence” that is already misaligned. (47:23-48:02)
- Objective Function: Its goal is “to convert as much of the world, people’s creativity, ideas, labor, natural resources, everything into capital.” (47:48-48:02). The “paper clips” are the capital itself. (48:02)
- Misalignment: The system maximizes a narrow value metric (capital) at the expense of wider values. It is analogous to a paperclip maximizer, pursuing profit without regard for broader well-being. (51:19-51:57, 56:25-57:27)
- Cybernetic Nature: The system self-regulates through feedback and feed-forward loops, much like a cybernetic system. (55:02-55:30)
- Decentralized but Powerful: It uses all human general intelligences to pursue its objective function. (50:13-51:13)
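The paperclip-maximizer analogy can be illustrated with a toy feedback loop. This is a hypothetical sketch; the variable names (“capital”, “ecology”) and all numbers are invented for illustration, not drawn from the conversation.

```python
# Hypothetical sketch of a narrow-objective optimizer: an agent that
# measures only one metric ("capital") converts an unmeasured stand-in
# value ("ecology") into it, cycle after cycle. All names and numbers
# are illustrative assumptions.

def step(state: dict, extraction_rate: float) -> dict:
    """One cycle of the feedback loop: convert ecology into capital."""
    extracted = state["ecology"] * extraction_rate
    return {
        "capital": state["capital"] + extracted,  # the measured objective
        "ecology": state["ecology"] - extracted,  # the externalized, unmeasured cost
    }

state = {"capital": 0.0, "ecology": 100.0}
for _ in range(10):
    # The optimizer sees only "capital", so any positive extraction rate
    # looks like progress; the unmeasured variable is drawn down silently.
    state = step(state, extraction_rate=0.5)

print(state)  # capital approaches 100 as ecology approaches 0
```

By construction the measured metric climbs monotonically while everything outside the objective function is depleted, which is the misalignment pattern the speakers attribute to the wider system.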
4. AI as an Accelerator of the Meta-Crisis
AI’s Unique Power
Unlike other technologies, AI can improve and optimize all existing technologies, and its applications cut across a variety of domains. “AI gives us better all of them.” (1:03:10-1:03:15)
Dual/Omni Use
All technology is dual-use, and AI is “omni-use”: it will be used for every purpose that those with the incentive have the capacity to use it for. (1:06:27-1:06:41)
Risks
- Scaling Existing Problems: AI can scale up existing issues, such as carbon emissions and environmental damage. (1:08:42-1:10:07)
- Increased Info Complexity: AI produces “inscrutable matrices” that function as black boxes, making control and regulation increasingly challenging. (1:10:07-1:11:08)
- Acceleration of Moloch: AI is being developed within the existing misaligned system, which accelerates negative-sum dynamics. (1:15:32-1:15:39)
- Pre-AGI Risks: The risks the world should be most concerned about already exist without reaching AGI; AI is “accelerating the topology that is already in place.” (1:01:09-1:01:15)
- The Paperclip Maximizer Analogy: The misaligned-AGI thought experiment helps people understand Moloch; conversely, the reality of Moloch helps people see that this kind of risk is already unfolding well before any full AGI. (1:01:23-1:01:36)
5. The Need for a New Attractor: Beyond Catastrophe and Dystopia
- Existing Attractors: The current system seems to be trending towards either catastrophic breakdown or a dystopia where risks are controlled by surveillance and authoritarian measures. (33:20-33:32)
- A Third Way: There is an urgent need for a third attractor, a future that is neither catastrophic nor dystopic. (33:51-33:57) This future needs checks and balances to prevent unchecked power. (34:03-34:14)
- Shifting Incentives: The current system, shaped by Moloch dynamics, does not prioritize this third option, so its incentives need to be redesigned. (1:15:32-1:15:39)
- Positive Sum Games (Omnia): The aim is to orient AI development toward positive-sum outcomes, moving away from the current negative-sum dynamics. (1:15:52-1:16:23)
6. Call to Action and Key Points
- Risk-Focused Approach: Technologists must engage with risk arguments, prioritize risk analysis rather than focusing solely on advancing opportunities, and tie risk analysis to governance. (1:29:31-1:30:35)
- International Cooperation: There needs to be more international cooperation and a multi-stakeholder approach to governing AI. (1:27:28-1:28:39)
- Re-evaluate Objectives: Companies must re-evaluate their objective functions and consider broader, longer-term values rather than solely profit-driven short-term metrics. (56:25-57:27, 1:19:20-1:20:57)
- Precautionary Principle: Given the irreversibility and high-speed nature of these processes, a precautionary principle must guide AI development. (1:24:22-1:24:33)
- The Problem of “Good” Metrics: Society needs agreement on a framework for defining the “good reward circuits” that technology should be aiming for. (1:20:57-1:21:22)