# Expectiminimax

Class | Search algorithm
---|---
Worst-case performance | O(*b*^*m* *n*^*m*), where *b* is the branching factor, *m* is the search depth, and *n* is the number of distinct dice throws
Best-case performance | O(*b*^*m*), in case all dice throws are known in advance


The **expectiminimax** algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value of a random event occurring.^{[1]} In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete information.

In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached.^{[1]}
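The weighted average at a chance node can be sketched in a few lines of Python. The probabilities and child values below are invented for illustration:

```python
# Value of a chance node: probability-weighted average of its children's values.
# (probability, child_value) pairs -- hypothetical numbers for illustration.
children = [(1/6, 4.0), (5/6, -2.0)]

value = sum(p * v for p, v in children)
print(value)  # (1/6)*4 + (5/6)*(-2) = -1.0
```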

The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially-optimal opponent's turn), or a "chance" node (representing a random effect or player).^{[1]}

For example, consider a game in which each round consists of a single die throw, and then decisions made by first the AI player, and then another intelligent opponent. The order of nodes in this game would alternate between "chance", "max" and then "min".^{[1]}

## Pseudocode

The expectiminimax algorithm is a variant of the minimax algorithm and was first proposed by Donald Michie in 1966.^{[2]}
Its pseudocode is given below.

```
function expectiminimax(node, depth)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    if the adversary is to play at node
        // Return value of minimum-valued child node
        let α := +∞
        foreach child of node
            α := min(α, expectiminimax(child, depth-1))
    else if we are to play at node
        // Return value of maximum-valued child node
        let α := -∞
        foreach child of node
            α := max(α, expectiminimax(child, depth-1))
    else if random event at node
        // Return weighted average of all child nodes' values
        let α := 0
        foreach child of node
            α := α + (Probability[child] × expectiminimax(child, depth-1))
    return α
```
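The pseudocode translates directly into a runnable sketch. The tree encoding below (tuples tagged `"max"`, `"min"`, `"chance"`, or `"leaf"`) is an assumption made for illustration, not part of the algorithm itself:

```python
# A minimal sketch of expectiminimax. The tuple-based tree encoding is a
# hypothetical representation chosen for this example.

def heuristic(node):
    # Leaves store their exact value; interior nodes cut off by the depth
    # limit would need a real evaluation function (0.0 is a placeholder).
    return node[1] if node[0] == "leaf" else 0.0

def expectiminimax(node, depth):
    kind = node[0]
    if kind == "leaf" or depth == 0:
        return heuristic(node)
    if kind == "min":     # adversary to play: minimum-valued child
        return min(expectiminimax(c, depth - 1) for c in node[1])
    if kind == "max":     # we are to play: maximum-valued child
        return max(expectiminimax(c, depth - 1) for c in node[1])
    if kind == "chance":  # random event: probability-weighted average
        return sum(p * expectiminimax(c, depth - 1) for p, c in node[1])
    raise ValueError("unknown node kind: " + kind)

# A fair coin flip between a max decision and a min decision:
tree = ("chance", [
    (0.5, ("max", [("leaf", 3), ("leaf", 7)])),
    (0.5, ("min", [("leaf", 2), ("leaf", 8)])),
])
print(expectiminimax(tree, 3))  # 0.5*7 + 0.5*2 = 4.5
```

Note that, unlike plain minimax, the presence of chance nodes makes simple alpha-beta pruning inapplicable without extensions such as bounding the range of leaf values.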

Note that for random nodes, there must be a known probability of reaching each child. (For most games of chance, child nodes will be equally-weighted, which means the return value can simply be the average of all child values.)
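For instance, with a fair six-sided die the weighted average reduces to the plain mean of the child values:

```python
# Sketch: equally likely children (a fair six-sided die) make the
# probability-weighted average equal to the plain mean.
values = [1, 2, 3, 4, 5, 6]
weighted = sum((1 / len(values)) * v for v in values)
mean = sum(values) / len(values)
print(weighted, mean)  # both 3.5
```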


## References

1. ^{a b c d} Stuart J. Russell; Peter Norvig (2009). *Artificial Intelligence: A Modern Approach*. Prentice Hall. pp. 177–178. ISBN 978-0-13-604259-4.
2. D. Michie (1966). "Game-playing and game-learning automata". In L. Fox (ed.), *Advances in Programming and Non-Numerical Computation*. pp. 183–200.