This is taken to be an important analogy for social cooperation. In the context of international relations, this model has been used to describe the preferences of actors when deciding whether to enter an arms treaty or not. In this example, each player has a dominant strategy. Together, the likelihood of winning and the likelihood of lagging sum to 1. This iterated structure creates an incentive to cooperate; cheating in the first round significantly reduces the likelihood that the other player will trust one enough to attempt to cooperate in the future. Table 4. However, anyone who hunts rabbit can do so successfully by themselves, but with a smaller meal. Here, we assume that the harm of an AI-related catastrophe would be evenly distributed amongst actors. In short, the theory suggests that the variables that affect the payoff structure of cooperating or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). In this book, you will be introduced to realism, liberalism and economic structuralism as major traditions in the field, their historical evolution and some theories they have given birth to. What is the difference between ethnic cleansing and genocide? Hunting stag is successful only if both hunters hunt stag, while each hunter can catch a less valuable hare on his own. It is the goal of this paper to shed some light on these, particularly how the structure of preferences that result from states' understandings of the benefits and harms of AI development leads to varying prospects for coordination. The metaphors that populate game theory models are images such as prisoners. Each player must choose an action without knowing the choice of the other. If, by contrast, the prospect of a return to anarchy looms, trust erodes and short-sighted self-interest wins the day. [51] An analogous scenario in the context of the AI Coordination Problem could be if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. This may not amount to a recipe for good governance, but it has meant the preservation of a credible bulwark against state collapse. This is visually represented in Table 3, with each actor's preference order explicitly outlined. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. (1) the responsibility of the state to protect its own population from genocide, war crimes, ethnic cleansing and crimes against humanity, and from their incitement; What is the difference between structural and operational conflict prevention? Examples of the stag hunt: The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. [27] An academic survey showed that AI experts and researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years. The dynamics change once the players learn with whom to interact. Altogether, the considerations discussed are displayed in Table 6 as a payoff matrix. But the moral is not quite so bleak. They suggest that new weapons (or systems) that derive from radical technological breakthroughs can render a first strike more attractive, whereas basic arms buildups provide deterrence against a first strike.
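To make the trust problem concrete, the following is a minimal Python sketch of the stag hunt in normal form. The payoff numbers (4, 3, 0) are illustrative placeholders, not values drawn from the essay's tables; the sketch simply checks each pure-strategy profile for Nash equilibrium and recovers the two equilibria, (stag, stag) and (hare, hare), that make this a coordination problem rather than a game with a dominant strategy.

from itertools import product

ACTIONS = ["stag", "hare"]

# payoffs[(row_action, col_action)] = (row_player_payoff, col_player_payoff)
# Hypothetical values: mutual stag hunting beats hare for both, but hunting
# stag alone yields nothing.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def is_pure_nash(a, b):
    # A profile is a pure Nash equilibrium if neither player gains by unilaterally deviating.
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in ACTIONS)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in ACTIONS)
    return row_ok and col_ok

equilibria = [p for p in product(ACTIONS, repeat=2) if is_pure_nash(*p)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]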
I refer to this as the AI Coordination Problem. Additionally, this model accounts for an AI Coordination Regime that might result in variable distribution of benefits for each actor. Downs et al. [47] look at different policy responses to arms race de-escalation and find that the model or game that underlies an arms race can affect the success of policies or strategies to mitigate or end the race. It is not clear whether the errors were deliberate or accidental. This distribution variable is expressed in the model as d, where differing effects of distribution are expressed for Actors A and B as d_A and d_B respectively. [54] [25] For more on the existential risks of Superintelligence, see Bostrom (2014) at Chapters 6 and 8. In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. If one side cooperates with and one side defects from the AI Coordination Regime, we can expect their payoffs to be expressed as follows (here we assume Actor A defects while Actor B cooperates): for the defector (here, Actor A), the benefit from an AI Coordination Regime consists of the probability that they believe such a regime would achieve a beneficial AI times Actor A's perceived benefit of receiving AI with distributional considerations [P_(b|A)(AB) × b_A × d_A]. The prototypical example of a public goods game (PGG) is captured by the so-called N-person prisoner's dilemma (NPD). Although Section 2 describes to some capacity that this might be a likely event with the U.S. and China, it is still conceivable that an additional international actor could move into the fray and complicate coordination efforts. Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. The second player, or nation in this case, has the same option. We find that individuals under the time pressure treatment are more likely to play stag (vs. hare) than individuals in the control group: under time constraints 62.85% of players are stag-hunters. [46] Charles Glaser, Realists as Optimists: Cooperation as Self-Help, International Security 19, 3 (1994): 50-90. However, both hunters know the only way to successfully hunt a stag is with the other's help. [10] AI expert Andrew Ng says AI is the new electricity | Disrupt SF 2017, TechCrunch Disrupt SF 2017, TechCrunch, September 20, 2017, https://www.youtube.com/watch?v=uSCka8vXaJc. Table 2. [44] Thomas C. Schelling & Morton H. Halperin, Strategy and Arms Control. Payoff variables for simulated Chicken game. This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies. A relevant strategy to this insight would be to focus strategic resources on shifting public or elite opinion to recognize the catastrophic risks of AI. If security increases can't be distinguished as purely defensive, this increases instability. In times of stress, individual unicellular protists will aggregate to form one large body. Despite the damage it could cause, the impulse to go it alone has never been far off, given the profound uncertainties that define the politics of any war-torn country. In so doing, they have maintained a kind of limited access order, drawing material and political benefits from cooperating with one another, most recently as part of the current National Unity Government. [18] Deena Zaidi, The 3 most valuable applications of AI in health care, VentureBeat, April 22, 2018, https://venturebeat.com/2018/04/22/the-3-most-valuable-applications-of-ai-in-health-care/.
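As a quick illustration of the defector's benefit term just described, here is a hedged numeric sketch in Python. The probability, benefit, and distribution values are hypothetical placeholders chosen for readability, not estimates from the essay.

# Defector's benefit from the Coordination Regime, as described above:
# P_(b|A)(AB) * b_A * d_A. All numbers below are hypothetical placeholders.
p_beneficial_regime_A = 0.8  # Actor A's belief that the regime yields a beneficial AI
benefit_A = 100.0            # b_A: Actor A's perceived benefit of a beneficial AI
share_A = 0.5                # d_A: share of the regime's benefits Actor A expects

defector_regime_benefit = p_beneficial_regime_A * benefit_A * share_A
print(defector_regime_benefit)  # 40.0 under these placeholder values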
International Relations, arguing that territorial conflicts in international relations follow a strategic logic, but one defined by cost-benefit calculations. [50] This is visually represented in Table 3, with each actor's preference order explicitly outlined. [4] In international law, countries are the participants in a stag hunt. Let us call a stag hunt game where this condition is met a stag hunt dilemma. As will hold for the following tables, the most preferred outcome is indicated with a 4, and the least preferred outcome is indicated with a 1. Actor A's preference order: DC > CC > DD > CD; Actor B's preference order: CD > CC > DD > DC. This article is about the game theory problem of stag hunting. A classic game theoretic allegory best demonstrates the various incentives at stake for the United States and Afghan political elites at this moment. [13] Tesla Inc., Autopilot, https://www.tesla.com/autopilot. The payoff matrix in Figure 1 illustrates a generic stag hunt. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation. [8] Elsa Kania, Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence, Lawfare, June 20, 2017, https://www.lawfareblog.com/beyond-cfius-strategic-challenge-chinas-rise-artificial-intelligence (highlighting legislation considered that would limit Chinese investments in U.S. artificial intelligence companies and other emerging technologies considered crucial to U.S. national security interests). These remain real temptations for a political elite that has survived decades of war by making deals based on short time horizons and low expectations for peace. [25] In a particularly telling quote, Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek foreshadow this stark risk: One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. The matrix above provides one example. The story is briefly told by Rousseau, in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit." [49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production. Different social/cultural systems are prone to clash. [14] IBM, Deep Blue, Icons of Progress, http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/. Finally, in the game of chicken, two sides race to collision in the hopes that the other swerves from the path first. [9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767). The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. This subsection looks at the four predominant models that describe the situation two international actors might find themselves in when considering cooperation in developing AI, where research and development is costly and its outcome is uncertain.
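The preference orders above can be turned mechanically into the ordinal payoff matrices used in the tables. The short Python sketch below assumes the stated convention (4 = most preferred, 1 = least preferred) and writes outcomes from Actor A's perspective, so "DC" means A defects while B cooperates; it illustrates the bookkeeping only and is not a reproduction of any specific table from the essay.

def ordinal_payoffs(preference_order):
    # First-listed outcome gets the highest ordinal payoff (4), last gets 1.
    n = len(preference_order)
    return {outcome: n - i for i, outcome in enumerate(preference_order)}

pref_A = ["DC", "CC", "DD", "CD"]  # Actor A's preference order
pref_B = ["CD", "CC", "DD", "DC"]  # Actor B's preference order

pay_A, pay_B = ordinal_payoffs(pref_A), ordinal_payoffs(pref_B)
for outcome in ["CC", "CD", "DC", "DD"]:
    print(outcome, (pay_A[outcome], pay_B[outcome]))
# CC (3, 3), CD (1, 4), DC (4, 1), DD (2, 2)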
For example, it is unlikely that even the actor themselves will be able to effectively quantify their perception of capacity, riskiness, magnitude of risk, or magnitude of benefits. The Stag Hunt Theory and the Formation of Social Contracts, http://www.socsci.uci.edu/~bskyrms/bio/papers/StagHunt.pdf. As such, it will be useful to consider each model using a traditional normal-form game setup as seen in Table 1. The corresponding payoff matrix is displayed as Table 14. In this section, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. The paper proceeds as follows. What is the difference between 'negative' and 'positive' peace? [3] While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Payoff matrix for simulated Deadlock. As a result, it is conceivable that international actors might agree to certain limitations or cooperative regimes to reduce insecurity and stabilize the balance of power. In recent times, more doctrinal exceptions to Article 2(4), such as anticipatory self-defence (especially after the events of 9/11) and humanitarian intervention, have emerged. The Stag Hunt is a story that became a game. A great example of chicken in IR is the Cuban Missile Crisis. Within the arms race literature, scholars have distinguished between types of arms races depending on the nature of arming. In the event that both actors are in a Stag Hunt, all efforts should be made to pursue negotiations and persuade rivals of peaceful intent before the window of opportunity closes. We can see through studying the Stag Hunt game that, even though we are selfish, we are still ironically aiming for mutual benefit, and thus we tend to follow such a social contract. Table 5. [1] Kelly Song, Jack Ma: Artificial intelligence could set off WWIII, but humans will win, CNBC, June 21, 2017, https://www.cnbc.com/2017/06/21/jack-ma-artificial-intelligence-could-set-off-a-third-world-war-but-humans-will-win.html. Two hunters can either jointly hunt a stag (an adult deer and rather large meal) or individually hunt a rabbit (tasty, but substantially less filling). At key moments, the cooperation among Afghan politicians has been maintained with a persuasive nudge from U.S. diplomats. However, a hare is seen by all hunters moving along the path. If both sides cooperate in an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from an AI Coordination Regime consists of the probability that each actor believes such a regime would achieve a beneficial AI, expressed as P_(b|A)(AB) for Actor A's belief and P_(b|B)(AB) for Actor B's, times each actor's perceived benefit of AI, expressed as b_A and b_B. Like the hunters in the woods, Afghanistan's political elites have a great deal, at least theoretically, to gain from sticking together. Some have accused rivals of being Taliban sympathizers while others have condemned their counterparts for being against peace.
Put another way, the development of AI under international racing dynamics could be compared to two countries racing to finish a nuclear bomb if the actual development of the bomb (and not just its use) could result in unintended, catastrophic consequences. [8] If truly present, a racing dynamic[9] between these two actors is a cause for alarm and should inspire strategies to develop an AI Coordination Regime between these two actors. An individual can get a hare by himself, but a hare is worth less than a stag. This table contains a sample ordinal representation of a payoff matrix for a Stag Hunt game. I will apply them to IR and give an example for each. As a result, there is no conflict between self-interest and mutual benefit, and the dominant strategy of both actors would be to defect. The model's payoff variables include the following:

- Probability Actor A believes it will develop a beneficial AI
- Probability Actor B believes Actor A will develop a beneficial AI
- Probability Actor A believes Actor B will develop a beneficial AI
- Probability Actor B believes it will develop a beneficial AI
- Probability Actor A believes AI Coordination Regime will develop a beneficial AI
- Probability Actor B believes AI Coordination Regime will develop a beneficial AI
- Percent of benefits Actor A can expect to receive from an AI Coordination Regime
- Percent of benefits Actor B can expect to receive from an AI Coordination Regime
- Actor A's perceived utility from developing beneficial AI
- Actor B's perceived utility from developing beneficial AI
- Probability Actor A believes it will develop a harmful AI
- Probability Actor B believes Actor A will develop a harmful AI
- Probability Actor A believes Actor B will develop a harmful AI
- Probability Actor B believes it will develop a harmful AI
- Probability Actor A believes AI Coordination Regime will develop a harmful AI
- Probability Actor B believes AI Coordination Regime will develop a harmful AI
- Actor A's perceived harm from developing a harmful AI
- Actor B's perceived harm from developing a harmful AI

If all the hunters work together, they can kill the stag and all eat. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of the actor's perceived likelihood that such a regime would create a harmful AI, expressed as P_(h|A)(AB) for Actor A and P_(h|B)(AB) for Actor B, times each actor's perceived harm, expressed as h_A and h_B. Nations are able to communicate with each other freely, something that is forbidden in the traditional PD game. [28] Armstrong et al., Racing to the precipice: a model of artificial intelligence development.
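Putting the benefit and harm terms together, the sketch below computes one actor's expected payoff from mutual cooperation as the benefit term minus the harm term. Reading the net payoff as a simple subtraction, and all of the numeric values, are assumptions made for illustration rather than formulas or figures stated in the essay's tables.

def regime_payoff(p_beneficial, benefit, share, p_harmful, harm):
    # Expected value of the Coordination Regime for one actor, read as
    # P_(b|X)(AB) * b_X * d_X minus P_(h|X)(AB) * h_X
    # (harm assumed evenly distributed, so it is not scaled by d_X).
    return p_beneficial * benefit * share - p_harmful * harm

# Placeholder parameters for Actor A
print(regime_payoff(p_beneficial=0.8, benefit=100.0, share=0.5,
                    p_harmful=0.1, harm=200.0))  # 40.0 - 20.0 = 20.0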
The ultimate resolution of the war in Afghanistan will involve a complex set of interlocking bargains, and the presence of U.S. forces represents a key political instrument in those negotiations. In short, the theory suggests that the variables that affect the payoff structure of cooperating or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). Some observers argue that a precipitous American retreat will leave the country, and even the capital, Kabul, vulnerable to an emboldened, undeterred Taliban, given the limited capabilities of Afghanistan's national security forces. This democratic peace proposition not only challenges the validity of other political systems (i.e., fascism, communism, authoritarianism, totalitarianism), but also the prevailing realist account of international relations, which emphasises balance-of-power calculations and common strategic interests in order to explain the peace and stability that characterise relations between liberal democracies. Additionally, the defector can expect to receive the additional expected benefit of defecting and covertly pursuing AI development outside of the Coordination Regime. One example payoff structure that results in a Chicken game is outlined in Table 11. Based on the values that each actor assigns to their payoff variables, we can expect different coordination models (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) to arise. [13] And impressive victories over humans in chess by AI programs[14] are being dwarfed by AI's ability to compete with and beat humans at exponentially more difficult strategic endeavors like the games of Go[15] and StarCraft. [29] There is a scenario where a private actor might develop AI in secret from the government, but this is unlikely to be the case as government surveillance capabilities improve. To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem, and as a result, what strategies should be applied to encourage coordination and stability. The coincident timing of high-profile talks with a leaked report that President Trump seeks to reduce troop levels by half has already triggered a political frenzy in Kabul. [39] D. S. Sorenson, Modeling the Nuclear Arms Race: A Search for Stability, Journal of Peace Science 4 (1980): 169-85. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. [21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more societal realms. This same dynamic could hold true in the development of an AI Coordination Regime, where actors can decide whether to abide by the Coordination Regime or find a way to cheat. [32] Paul Mozur, Beijing Wants A.I. Schelling and Halperin [44] offer a broad definition of arms control as all forms of military cooperation between potential enemies in the interest of reducing the likelihood of war, its scope and violence if it occurs, and the political and economic costs of being prepared for it.
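One way to make "which model arises" operational is to map an actor's ordinal preference order onto the four named games. The sketch below uses the standard textbook orderings for a symmetric two-by-two game, written from one actor's perspective ("DC" = I defect, the other cooperates); these orderings are assumptions of the sketch, and the essay's own tables may carve up the cases somewhat differently.

GAME_ORDERS = {
    ("DC", "CC", "DD", "CD"): "Prisoner's Dilemma",
    ("DC", "CC", "CD", "DD"): "Chicken",
    ("CC", "DC", "DD", "CD"): "Stag Hunt",
    ("DC", "DD", "CC", "CD"): "Deadlock",
}

def classify(preference_order):
    # Return the coordination model implied by a strict ordinal preference order.
    return GAME_ORDERS.get(tuple(preference_order), "unrecognized ordering")

print(classify(["CC", "DC", "DD", "CD"]))  # Stag Hunt
print(classify(["DC", "DD", "CC", "CD"]))  # Deadlock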
Members of the Afghan political elite have long found themselves facing a similar trade-off. A major terrorist attack launched from Afghanistan would represent a kind of equal opportunity disaster and should make a commitment to establishing and preserving a capable state of ultimate value to all involved. If, by contrast, each hunter patiently keeps his or her post, everyone will be rewarded with a lavish feast. Name four key thinkers of the theory of non-violent resistance: Gandhi, Martin Luther King, Malcolm X, Cesar Chavez. Evaluate this statement. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit." What is the so-called 'holy trinity' of peacekeeping? [16] Google DeepMind, DeepMind and Blizzard open StarCraft II as an AI research environment, https://deepmind.com/blog/deepmind-and-blizzard-open-starcraft-ii-ai-research-environment/. Here, both actors demonstrate a high degree of optimism in both their own and their opponent's ability to develop a beneficial AI, while this likelihood would only be slightly greater under a cooperation regime. I refer to this as the AI Coordination Problem. As discussed, there are both great benefits and harms to developing AI, and due to the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China). Read the following questions. [20] Will Knight, Could AI Solve the World's Biggest Problems? MIT Technology Review, January 12, 2016, https://www.technologyreview.com/s/545416/could-ai-solve-the-worlds-biggest-problems/. In biology many circumstances that have been described as prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. These talks involve a wide range of Afghanistan's political elites, many of whom are often painted as a motley crew of corrupt warlords engaged in tribalized opportunism at the expense of a capable government and their own countrymen. As of 2017, there were 193 member-states of the international system as recognized by the United Nations. Not wanting to miss out on the high geopolitical drama, Moscow invited Afghanistan's former president, Hamid Karzai, and a cohort of powerful elites, among them rivals of the current president, to sit down with a Taliban delegation last week.
Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. [26] Here, if they all act together, they can successfully reproduce, but success depends on the cooperation of many individual protozoa. In these abstractions, we assume two utility-maximizing actors with perfect information about each other's preferences and behaviors. Nonetheless many would call this game a stag hunt. This can be facilitated, for example, by a state leader publicly and dramatically expressing understanding of danger and willingness to negotiate with other states to achieve this. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change. [1] An example of norm enforcement provided by Axelrod (1986: 1100) is of a man hit in the face with a bottle for failing to support a lynching in the Jim Crow South. One example payoff structure that results in a Deadlock is outlined in Table 9. (Pergamon Press: 1985). The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. [56] Downs et al., Arms Races and Cooperation. [57] This is additionally explored in Jervis, Cooperation Under the Security Dilemma. In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. [52] In the context of developing an AI Coordination Regime, recognizing that two competing actors are in a state of Deadlock might drive peace-maximizing individuals to pursue de-escalation strategies that differ from those for other game models. There are three levels: the man, the structure of the state, and the international system. See Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, & Owain Evans, When Will AI Exceed Human Performance? The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base.
