# Quantal response equilibrium

Quantal response equilibrium
A solution concept in game theory

- Superset of: Nash equilibrium, logit equilibrium
- Proposed by: Richard McKelvey and Thomas Palfrey
- Used for: Non-cooperative games
- Example: Traveler's dilemma

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy. In other words, very costly errors are unlikely.

The equilibrium condition concerns beliefs: a player's payoffs are computed based on beliefs about the other players' probability distributions over strategies, and in equilibrium those beliefs are correct.

## Application to data

When analyzing data from the play of actual games, particularly from laboratory experiments such as those with the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", but realistically should not be grounds for rejecting a theory. QRE allows every strategy to be played with nonzero probability, and so any data is possible (though not necessarily reasonable).
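This point can be illustrated with a small likelihood comparison (a hypothetical sketch; the choice counts and probabilities below are invented for illustration, not taken from any experiment). A point prediction assigns zero probability to off-equilibrium play, so a single deviation drives the data's likelihood to zero, while a full-support QRE-style prediction always yields a finite likelihood:

```python
import math

def log_likelihood(probs, counts):
    """Multinomial log-likelihood of observed choice counts under a prediction."""
    ll = 0.0
    for p, n in zip(probs, counts):
        if n > 0:
            if p == 0.0:
                return float("-inf")  # observed play the model says is impossible
            ll += n * math.log(p)
    return ll

counts = [40, 60]               # hypothetical lab data: 40 plays of A, 60 of B
point_prediction = [0.0, 1.0]   # a pure-strategy equilibrium prediction
qre_prediction = [0.35, 0.65]   # an illustrative full-support prediction

print(log_likelihood(point_prediction, counts))  # -inf: the data "rejects" the theory
print(log_likelihood(qre_prediction, counts))    # finite log-likelihood
```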

## Logit equilibrium

The most common specification for QRE is logit equilibrium (LQRE). In a logit equilibrium, players' strategies are chosen according to the probability distribution:

$P_{ij}={\frac {\exp(\lambda EU_{ij}(P_{-i}))}{\sum _{k}\exp(\lambda EU_{ik}(P_{-i}))}}$

Here $P_{ij}$ is the probability of player $i$ choosing strategy $j$, and $EU_{ij}(P_{-i})$ is the expected utility to player $i$ of choosing strategy $j$ under the belief that the other players play according to the probability distribution $P_{-i}$. Note that the "belief" distribution in the expected payoff on the right-hand side must match the choice distribution on the left-hand side. Computing expectations of observable quantities such as payoff, demand, or output therefore requires finding fixed points, as in mean field theory.
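Such a fixed point can be found numerically by iterating the logit response map. The sketch below uses damped fixed-point iteration on a hypothetical asymmetric matching-pennies game (the payoff matrices are illustrative, not from the text):

```python
import numpy as np

def logit_response(lmbda, expected_utils):
    """Logit choice probabilities; the max is subtracted for numerical stability."""
    z = lmbda * (expected_utils - expected_utils.max())
    e = np.exp(z)
    return e / e.sum()

def lqre(A, B, lmbda, iters=20000, tol=1e-12, damp=0.3):
    """Damped fixed-point iteration for a two-player logit equilibrium.
    A[i, j]: row player's payoff; B[i, j]: column player's payoff."""
    p = np.full(A.shape[0], 1.0 / A.shape[0])   # row mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])   # column mixed strategy
    for _ in range(iters):
        p_new = (1 - damp) * p + damp * logit_response(lmbda, A @ q)
        q_new = (1 - damp) * q + damp * logit_response(lmbda, B.T @ p)
        if max(abs(p_new - p).max(), abs(q_new - q).max()) < tol:
            return p_new, q_new
        p, q = p_new, q_new
    return p, q

# Asymmetric matching pennies (illustrative): the row player earns 9 for
# matching on the first strategy but only 1 for matching on the second;
# the column player earns 1 for mismatching.
A = np.array([[9.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
p, q = lqre(A, B, lmbda=1.0)
```

At the returned fixed point each player's mixed strategy is a logit response to the other's, so the "belief" density and the choice density coincide as the text requires.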

Of particular interest in the logit model is the non-negative parameter λ (sometimes written as 1/μ). λ can be thought of as the rationality parameter. As λ→0, players become "completely non-rational" and play each strategy with equal probability. As λ→∞, players become "perfectly rational" and play approaches a Nash equilibrium. In a non-mean-field variant of QRE, the equilibrium measure takes the form of a Gibbs measure, and λ is in fact the inverse of the temperature of the system, which quantifies the degree of random noise in decisions.
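The two limits can be checked directly with the logit choice rule. The prisoner's-dilemma payoffs below are a standard illustrative choice, not taken from the text:

```python
import math

def logit_choice(lmbda, utils):
    """Logit probabilities over a list of expected utilities."""
    m = max(utils)
    e = [math.exp(lmbda * (u - m)) for u in utils]
    s = sum(e)
    return [x / s for x in e]

# Prisoner's dilemma (illustrative payoffs): against an opponent who
# cooperates with probability q, EU(Cooperate) = 3q and EU(Defect) = 5q + (1-q).
q = 0.5
eus = [3 * q, 5 * q + (1 - q)]   # [1.5, 3.0]

print(logit_choice(0.0, eus))    # λ→0: uniform play, [0.5, 0.5]
print(logit_choice(10.0, eus))   # large λ: nearly pure defection
```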

## For dynamic games

For dynamic (extensive form) games, McKelvey and Palfrey defined agent quantal response equilibrium (AQRE). AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
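A minimal AQRE computation can be sketched by backward induction with a logit response at each decision node. The mini-ultimatum payoffs below are hypothetical, chosen only to illustrate the mechanics:

```python
import math

def logit(lmbda, utils):
    """Logit probabilities over a list of expected utilities."""
    m = max(utils)
    e = [math.exp(lmbda * (u - m)) for u in utils]
    s = sum(e)
    return [x / s for x in e]

lmbda = 2.0
# Mini-ultimatum game over a pie of 10 (hypothetical): the proposer offers a
# "low" or "fair" split; the responder accepts (payoffs realized) or rejects
# (both get 0). Entries are (proposer payoff, responder payoff) if accepted.
offers = {"low": (8.0, 2.0), "fair": (5.0, 5.0)}

# Step 1: the responder's agent at each node logit-responds to accept vs. reject.
accept = {name: logit(lmbda, [ur, 0.0])[0] for name, (up, ur) in offers.items()}

# Step 2: the proposer treats the responder's noisy behavior as a known
# probability distribution over actions, then logit-responds to it.
proposer_eu = [accept[name] * up for name, (up, ur) in offers.items()]
offer_probs = logit(lmbda, proposer_eu)
```

As AQRE requires, every action at every node, including rejection of either offer, occurs with positive probability; the low offer is simply rejected more often than the fair one.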

## Applications

The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. (2002) study overbidding in private-value auctions, Yi (2005) explores behavior in ultimatum games, Hoppe and Schmitz (2013) study the role of social preferences in principal-agent problems, and Kawagoe et al. (2018) investigate step-level public goods games with binary decisions. Vernon L. Smith and Michael J. Campbell have used a variant to model the effects of human sociability in economic interactions. Their model correctly predicts that agents are averse to resentment and punishment, that there is an asymmetry between gratitude/reward and resentment/punishment, and that this aversion diminishes as expected payoff increases, which captures two essential properties of prospect theory. It is also shown that structural properties of the game itself can cause bounded rationality, as is evident from the concavity of figure 1 in that work: the effort cost to agents of gathering more information to reduce noise (larger β) is rewarded with a diminishing marginal expected payoff, which motivates agents to stop reducing noise. The purely rational Nash equilibrium is shown to have no predictive power for that model, and the boundedly rational Gibbs equilibrium must be used to predict the phenomena outlined in Humanomics.

## Critiques

### Non-falsifiability

Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations. The authors argue that the LQRE concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations.

However, the authors say "this should not be mistaken for a critique of the QRE notion itself. Rather, our aim has been to clarify some limitations of examining behavior one game at a time and to develop approaches for more informative evaluation of QRE." This "non-falsifiability" results from showing that multiple probability distributions for player strategies may be consistent with expected values from QRE, and that further conditions, such as requiring identically distributed and independent perturbations, are needed to guarantee a unique probability distribution for individual behavior such as a logit distribution. This is essentially the same as the refinement problem that arises when multiple Nash equilibria occur.

### Loss of information

As in statistical mechanics, the mean-field approach, specifically the expectation taken inside the exponent, results in a loss of information. More generally, differences in an agent's payoff with respect to their strategy variable result in a loss of information. This information is used by agents, via action functions, to reward and punish other agents.