AI Players

From gdp3
Revision as of 22:46, 15 January 2011 by Staffan Björk (Talk | contribs)


AI Agents that are supposed to be able to function on a level comparable to players.

All games need players. However, there are not always enough people available and willing to play to meet the demands of a game's design, and in these cases supplementing people with rule-based algorithms can be a solution. These AI Players provide flexibility in when gameplay can occur and may also offer pre-defined or customizable opponents to fit the level of challenge people wish to have.


Already the first computer game, OXO, a computer-based version of Tic-Tac-Toe developed by Alexander S. Douglas, allowed a player to compete against the program itself[1]. Today nearly all multiplayer computer games support replacing people with computer opponents (e.g. the Age of Empires series, the Battlefield series, the Command and Conquer series, the Left 4 Dead series, and the Tekken series). For some of these games it may be impractical to find enough people to fill all the available player slots, so the norm in these cases is that some or even the majority of the players are AI Players (examples of where this can occur include the Europa Universalis series, the Civilization series, and the Need for Speed series).

Not all AI Players need to be controlled by computers. A 'robot', really a set of instructions that a human needs to follow, was introduced in The Gathering Storm expansion of the card game Race for the Galaxy. This 'robot' allows a single player to play against it as if a two-player instance of the game were being played. An even earlier example of an AI Player was 'MENACE' by Donald Michie. Although not the first AI Player for Tic-Tac-Toe, it could get better each time it played and was first implemented using beads and about 300 matchboxes[2]. Another solution, which only works for a small range of games where interaction between players is very limited, is to use recordings of previous players' actions. The ESP Game is an example of this.
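
MENACE's learning scheme is simple enough to sketch in a few lines: each game state gets a 'matchbox' holding beads, one colour per legal move; moves are drawn in proportion to bead counts, and beads are added after a win or removed after a loss. The class below is a hypothetical minimal sketch of that reinforcement idea, not a reconstruction of Michie's exact design (all names are illustrative):

```python
import random

class MatchboxLearner:
    """MENACE-style learner: one 'matchbox' of beads per game state."""

    def __init__(self, initial_beads=3):
        self.boxes = {}           # state -> {move: bead count}
        self.initial = initial_beads
        self.history = []         # (state, move) pairs from the current game

    def choose(self, state, legal_moves):
        # Create the matchbox for an unseen state with equal bead counts.
        box = self.boxes.setdefault(
            state, {move: self.initial for move in legal_moves})
        moves = [m for m in box if box[m] > 0] or list(legal_moves)
        weights = [max(box.get(m, self.initial), 1) for m in moves]
        move = random.choices(moves, weights=weights)[0]
        self.history.append((state, move))
        return move

    def reinforce(self, won):
        # Winning adds a bead for every move played; losing removes one.
        for state, move in self.history:
            box = self.boxes[state]
            box[move] = max(box[move] + (1 if won else -1), 0)
        self.history.clear()
```

A winning line of play thus becomes progressively more likely to be replayed, which is how the physical MENACE could "get better each time it played".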

Depending on which perspective one chooses to use, programming games such as Crobots and P-Robots either have only AI Players or are Zero-Player Games.

Using the pattern

The creation of AI Players is the design of Algorithmic Agents that can take the role of players. This means that the Algorithmic Agents need to have an Own Agenda of winning the game or playing as well as possible, and may need to display Emotional Attachment to the outcomes as a substitute for the Social Interaction that commonly occurs among people playing the same game. For games with Team Play it also means that they have to be able to work in Teams, possibly including supporting Team Combos. While Team Strategy Identification can be demanding to support (due to the need for Negotiation), this can be mitigated by expressing suggestions or orders as Performative Utterances and giving the AI Players an Enforced Agent Behavior to always follow these.
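
In code terms, giving an Algorithmic Agent an Own Agenda often amounts to giving it an evaluation function it tries to maximize, behind the same interface human players use. The sketch below is one hypothetical way to structure this (all class and function names are assumptions for illustration):

```python
from abc import ABC, abstractmethod

class Player(ABC):
    """Common interface so human and AI Players are interchangeable."""

    @abstractmethod
    def choose_action(self, game_state, legal_actions):
        ...

class AIPlayer(Player):
    def __init__(self, evaluate):
        # 'evaluate' encodes the agent's Own Agenda,
        # e.g. expected score after taking an action.
        self.evaluate = evaluate

    def choose_action(self, game_state, legal_actions):
        # Greedy one-ply search: pick the action that scores
        # highest under the agent's own agenda.
        return max(legal_actions,
                   key=lambda action: self.evaluate(game_state, action))
```

Deeper search, team coordination, or obeying Performative Utterances can then be layered on top without changing the interface the rest of the game sees.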

One aspect of this design is to consider whether the choice of using AI Players can only be made at the beginning of the game or whether it can change during gameplay. The first option allows Multiplayer Games to have more players than the people playing them, or to be played as if they were Single-Player Games, while avoiding the problems of Game Balance and Team Balance caused by changes in players. The second option allows Late Arriving Players and Drop-In/Drop-Out, which give players Freedom of Choice in how to synchronize their own play sessions with other players (one effect of which is to support Game Pauses for players individually), but which may cause problems with Game Balance and Team Balance unless the games are pure Cooperation games. A special case here may be Mules, which are started by players (possibly after being created and programmed by them) to do Grinding in their place (the parody game Progress Quest can be seen as only letting a human set up a Mule and nothing else).
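
A common way to support the second option is to bind each player slot to a controller that can be swapped at runtime, so a human who drops out is replaced by an AI Player without disturbing the rest of the game. A minimal hypothetical sketch (the names and the trivial AI policy are assumptions):

```python
class PlayerSlot:
    """A seat in the game; the controller behind it can change mid-game."""

    def __init__(self, controller):
        self.controller = controller   # human input handler or AI routine

    def take_turn(self, state):
        return self.controller(state)

def human_controller(state):
    # Placeholder for reading a move from an actual person.
    return input(f"Your move ({state}): ")

def ai_controller(state):
    # Placeholder AI policy; a real one would inspect the state.
    return "pass"

slot = PlayerSlot(human_controller)
# The person drops out: hand the seat to an AI Player. The game loop
# keeps calling slot.take_turn() and never needs to know the difference.
slot.controller = ai_controller
```

The same swap in the other direction supports Late Arriving Players taking over a seat an AI Player has been keeping warm.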

One less common choice for AI Players is to make the game design demand that all players are AI Players. While not necessary, this is typically combined with letting players have the Creative Control to program the AI Players, as is done in Crobots and P-Robots. Another uncommon choice is to base AI Players on Replays. This is probably uncommon because gameplay typically involves much interaction between the players, which a recording is unlikely to be able to match (but see the ESP Game). Ghosts are close to this solution but are clearly discernible as Replays and thereby not perceived as players, in addition to not being able to interact with other game elements.

If several types of AI Players are offered for players to choose from, together with explanations of their differences in play style and skill, this can provide both Varied Gameplay and an indirect way of having Difficulty Settings. If the AI Players can change their behavior depending on how well people are playing, this is one form of Dynamic Difficulty Adjustment.
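
Dynamic Difficulty Adjustment in this sense can be as simple as nudging an AI Player's skill parameter against the human's recent results. The sketch below is a hypothetical minimal version of that idea (the class, parameter names, and step size are all assumptions):

```python
class AdaptiveOpponent:
    """Raises its skill when the human wins, lowers it when they lose."""

    def __init__(self, skill=0.5, step=0.05):
        self.skill = skill   # 0.0 = pushover, 1.0 = plays as well as it can
        self.step = step

    def record_result(self, human_won):
        # Move skill toward the human's level, clamped to [0, 1].
        delta = self.step if human_won else -self.step
        self.skill = min(max(self.skill + delta, 0.0), 1.0)
```

The skill value might then feed into, for example, the probability that the AI Player picks its best-rated move rather than a random legal one.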

Interface Aspects

For Self-Facilitated Games, the use of AI Players typically introduces a fair amount of Excise. Examples of this can be found in the 'robot' in Race for the Galaxy and in playing Tic-Tac-Toe against 'MENACE'.

In the case of computer games, AI Players can bypass physical interfaces, which can make it difficult to provide Player Balance. This is especially evident for games with Mimetic Interfaces, since it is clear that the AI Players are not competing under the same conditions.

Consequences


AI Players are a way to provide Agents that allow Multiplayer Games to be played as Single-Player Games or with more players than there are people playing. They also allow the creation of Zero-Player Games when all players are replaced by AI Players (although this typically opens up additional interpretations about Meta Games or raises the question of whether any gameplay occurs at all).

Although Enforced Agent Behavior can be used specifically to make AI Players follow suggestions or orders given by people, since they are Algorithmic Agents they always have an Enforced Agent Behavior towards the rules defining them.

When AI Players are used as a way to offer Difficulty Settings, they also provide Smooth Learning Curves and support players in reaching Game Mastery, in that players can choose opponents that suit their own skill levels.

Allowing players to program AI Players to be Mules or to take part in Zero-Player Games is an example of how players can be given Creative Control. This also opens up for Evolving Rule Sets, since the AI Players can be considered part of the rule set, and when players try to adjust to other players' code this can spark Red Queen Dilemmas.


Can Instantiate

Agents, Creative Control, Difficulty Settings, Dynamic Difficulty Adjustment, Drop-In/Drop-Out, Enforced Agent Behavior, Late Arriving Players, Mules, Teams, Varied Gameplay, Zero-Player Games

with Creative Control

Evolving Rule Sets, Red Queen Dilemmas

with Difficulty Settings

Game Mastery, Smooth Learning Curves

with Drop-In/Drop-Out

Freedom of Choice, Game Pauses

with Multiplayer Games

Single-Player Games

with Self-Facilitated Games

Excise

with Zero-Player Games

Meta Games

Can Modulate

Multiplayer Games

Can Be Instantiated By

Algorithmic Agents together with Own Agenda


Can Be Modulated By

Emotional Attachment, Enforced Agent Behavior

Possible Closure Effects


Potentially Conflicting With

Game Balance, Player Balance, Mimetic Interfaces, Team Balance


New pattern created in this wiki.

References


  1. Link to the EDSAC emulator website, which includes the code for 'OXO'.
  2. Michie, D. 'Trial and Error', in Penguin Science Survey 1961, Vol. 2.