# Perfect Bayesian equilibrium

A solution concept in game theory.

|  |  |
|---|---|
| Relationship | Subset of Bayesian Nash equilibrium |
| Proposed by | Cho and Kreps[citation needed] |
| Used for | Dynamic Bayesian games |
| Example | Signaling game |

In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information (sequential Bayesian games). It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components, strategies and beliefs:

• The strategy of a player at a given information set determines how that player acts at that information set. The action may depend on the history, as in a sequential game.
• The belief of a player at a given information set determines which node in that information set the player believes they are at. The belief may be a probability distribution over the nodes in the information set (in particular, a probability distribution over the possible types of the other players). Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1.

The strategies and beliefs should satisfy the following conditions:

• Sequential rationality: each strategy should be optimal in expectation, given the beliefs.
• Consistency: each belief should be updated according to the strategies and Bayes' rule, on every path of positive probability (on paths of zero probability, also known as off-equilibrium paths, the beliefs can be arbitrary).
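The consistency condition can be illustrated with a short sketch. The helper `bayes_update` and its data layout are hypothetical (for illustration only, not from any standard library): it derives posterior beliefs from the prior and the strategies via Bayes' rule, and returns `None` on zero-probability paths, where PBE leaves beliefs unconstrained.

```python
def bayes_update(prior, strategy, action):
    """Posterior beliefs at the information set reached by `action`.

    prior:    {type: probability} - the prior over the other player's types.
    strategy: {type: {action: probability}} - that player's (mixed) strategy.

    Returns {type: posterior probability}, or None if `action` has
    probability 0 (an off-equilibrium path), in which case PBE allows
    the beliefs to be arbitrary.
    """
    # Joint probability of each type taking this action.
    joint = {t: prior[t] * strategy[t].get(action, 0.0) for t in prior}
    total = sum(joint.values())
    if total == 0:
        return None  # zero-probability path: Bayes' rule does not apply
    return {t: joint[t] / total for t in joint}
```

For instance, if all types take the same action (pooling), the posterior equals the prior; if each type takes a different action (separating), the posterior puts probability 1 on the type that takes the observed action.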

A PBE is always an NE but may not be a subgame perfect equilibrium (SPE).

## PBE in signaling games

A signaling game is the simplest kind of dynamic Bayesian game. There are two players: one of them (the "receiver") has only one possible type, and the other (the "sender") has several possible types. The sender plays first, then the receiver.

To calculate a PBE in a signaling game, we consider two kinds of equilibria: a separating equilibrium and a pooling equilibrium. In a separating equilibrium each sender-type plays a different action, so the sender's action gives information to the receiver; in a pooling equilibrium, all sender-types play the same action, so the sender's action gives no information to the receiver.

Consider the following game:[1]

• The sender has two possible types: either a "friend" (with prior probability ${\displaystyle p}$) or an "enemy" (with prior probability ${\displaystyle 1-p}$). Each type has two strategies: either give a gift, or not give.
• The receiver has only one type, and two strategies: either accept the gift, or reject it.
• The sender's utility is 1 if their gift is accepted, -1 if their gift is rejected, and 0 if they do not give any gift.
• If the sender is a friend, then the receiver's utility is 1 (if they accept) or 0 (if they reject).
• If the sender is an enemy, then the receiver's utility is -1 (if they accept) or 0 (if they reject).

To analyze PBE in this game, let's look first at the following potential separating equilibria:

1. The sender's strategy is: a friend gives and an enemy does not give. The receiver's beliefs are updated accordingly: if they receive a gift, they know the sender is a friend; otherwise, they know the sender is an enemy. So receiver's strategy is: accept. This is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase their payoff from 0 to 1 by sending a gift.
2. The sender's strategy is: a friend does not give and an enemy gives. The receiver's beliefs are updated accordingly: if they receive a gift, they know the sender is an enemy; otherwise, they know the sender is a friend. The receiver's strategy is: reject. Again, this is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase their payoff from -1 to 0 by not sending a gift.

We conclude that in this game, there is no separating equilibrium.

Now, let's look at the following potential pooling equilibria:

1. The sender's strategy is: always give. The receiver's beliefs are not updated: they still hold the prior belief that the sender is a friend with probability ${\displaystyle p}$ and an enemy with probability ${\displaystyle 1-p}$. Their expected payoff from accepting is ${\displaystyle p\cdot 1+(1-p)\cdot (-1)=2p-1}$, so they accept if-and-only-if ${\displaystyle p\geq 1/2}$. So this is a PBE (a best response for both sender and receiver) if-and-only-if the prior probability of being a friend satisfies ${\displaystyle p\geq 1/2}$.
2. The sender's strategy is: never give. Here, the receiver's beliefs upon receiving a gift can be arbitrary, since receiving a gift is an event with probability 0, so Bayes' rule does not apply. For example, suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability 0.2 (or any other number less than 0.5). Then the receiver's strategy is: reject. This is a PBE regardless of the prior probability. Both the sender and the receiver get expected payoff 0, and neither of them can improve their expected payoff by deviating.

To summarize:

• If ${\displaystyle p\geq 1/2}$, then there are two PBEs: either the sender always gives and the receiver always accepts, or the sender always does not give and the receiver always rejects.
• If ${\displaystyle p<1/2}$, then there is only one PBE: the sender always does not give and the receiver always rejects. This PBE is not Pareto efficient, but this is inevitable, since the sender cannot reliably signal their type.
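The case analysis above can be verified numerically. A minimal sketch (the function names are ours, for illustration):

```python
# Numerical check of the pooling analysis in the gift game above.
# The receiver's expected payoff from accepting, given belief q that
# the sender is a friend, is q*1 + (1-q)*(-1) = 2q - 1.

def receiver_best_response(q):
    """Return 'accept' or 'reject' given belief q (friend probability)."""
    return 'accept' if 2 * q - 1 >= 0 else 'reject'

def pooling_give_is_pbe(p):
    """'Always give' is a PBE iff the receiver accepts under the prior p
    (otherwise a giving sender would earn -1 and prefer not to give)."""
    return receiver_best_response(p) == 'accept'

def pooling_never_give_is_pbe():
    """'Never give' is a PBE for any prior: pick an off-path belief
    q < 1/2 (here 0.2), so the receiver rejects and no sender type
    gains by deviating to giving."""
    return receiver_best_response(0.2) == 'reject'
```

For example, `pooling_give_is_pbe(0.7)` holds while `pooling_give_is_pbe(0.3)` does not, matching the ${\displaystyle p\geq 1/2}$ cutoff in the summary.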

In the following example, the set of PBEs is strictly smaller than the set of SPEs and BNEs. It is a variant of the above gift-game, with the following change to the receiver's utility:

• If the sender is a friend, then the receiver's utility is 1 (if they accept) or 0 (if they reject).
• If the sender is an enemy, then the receiver's utility is 0 (if they accept) or -1 (if they reject).

Note that in this variant, accepting is a dominant strategy for the receiver.

Similarly to example 1, there is no separating equilibrium. Let's look at the following potential pooling equilibria:

1. The sender's strategy is: always give. The receiver's beliefs are not updated: they still hold the prior belief that the sender is a friend with probability ${\displaystyle p}$ and an enemy with probability ${\displaystyle 1-p}$. Their payoff from accepting (${\displaystyle p}$) is always higher than from rejecting (${\displaystyle p-1}$), so they accept, regardless of the value of ${\displaystyle p}$. This is a PBE: it is a best response for both sender and receiver.
2. The sender's strategy is: never give. Suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability ${\displaystyle q}$, where ${\displaystyle q}$ is any number in ${\displaystyle [0,1]}$. Regardless of ${\displaystyle q}$, the receiver's optimal strategy is: accept. This is NOT a PBE, since the sender can improve their payoff from 0 to 1 by giving a gift.
3. The sender's strategy is: never give, and the receiver's strategy is: reject. This is NOT a PBE, since for any belief of the receiver, rejecting is not a best-response.

Note that option 3 is a Nash equilibrium! If we ignore beliefs, then rejecting can be considered a best response for the receiver, since it does not affect their payoff (there is no gift anyway). Moreover, option 3 is even an SPE, since the only subgame here is the entire game! Such implausible equilibria can also arise in games with complete information, where they are eliminated by requiring subgame perfection. However, Bayesian games often contain non-singleton information sets, and since subgames must contain complete information sets, sometimes there is only one subgame (the entire game), so every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can leave implausible equilibria in place.

To summarize: in this variant of the gift game, there are two SPEs: either the sender always gives and the receiver always accepts, or the sender always does not give and the receiver always rejects. From these, only the first one is a PBE; the other is not a PBE since it cannot be supported by any belief-system.
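The claim that rejecting can never be supported by any belief system can be checked directly: given belief ${\displaystyle q}$ that the sender is a friend, accepting yields ${\displaystyle q\cdot 1+(1-q)\cdot 0=q}$ while rejecting yields ${\displaystyle q\cdot 0+(1-q)\cdot (-1)=q-1}$. A quick sketch:

```python
# In the variant gift game, accepting dominates rejecting for every
# belief q that the sender is a friend.

def accept_payoff(q):
    """Receiver's expected payoff from accepting: friend -> 1, enemy -> 0."""
    return q * 1 + (1 - q) * 0

def reject_payoff(q):
    """Receiver's expected payoff from rejecting: friend -> 0, enemy -> -1."""
    return q * 0 + (1 - q) * (-1)

# No belief on a grid of q values makes 'reject' a best response, so the
# SPE in which the receiver rejects cannot be supported as a PBE.
assert all(accept_payoff(q / 100) > reject_payoff(q / 100) for q in range(101))
```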

### More examples

For further examples, see signaling game#Examples; see also [2].

## PBE in multi-stage games

A multi-stage game is a sequence of simultaneous games played one after the other. These games may be identical (as in repeated games) or different.

### Repeated public-good game

Public good game:

|  | Build | Don't build |
|---|---|---|
| Build | 1−C1, 1−C2 | 1−C1, 1 |
| Don't build | 1, 1−C2 | 0, 0 |

The following game[3]:section 6.2 is a simple representation of the free-rider problem. There are two players, each of whom can either build a public good or not build. Each player gains 1 if the public good is built and 0 if not; in addition, if player ${\displaystyle i}$ builds the public good, they have to pay a cost of ${\displaystyle C_{i}}$. The costs are private information - each player knows their own cost but not the other's cost. It is only known that each cost is drawn independently at random from some probability distribution. This makes this game a Bayesian game.

In the one-stage game, each player builds if-and-only-if their cost is smaller than their expected gain from building. The expected gain from building is exactly 1 times the probability that the other player does NOT build. In equilibrium, for every player ${\displaystyle i}$, there is a threshold cost ${\displaystyle C_{i}^{*}}$, such that the player contributes if-and-only-if their cost is less than ${\displaystyle C_{i}^{*}}$. This threshold cost can be calculated based on the probability distribution of the players' costs. For example, if the costs are distributed uniformly on ${\displaystyle [0,2]}$, then there is a symmetric equilibrium in which the threshold cost of both players is 2/3. This means that a player whose cost is between 2/3 and 1 will not contribute, even though their cost is below the benefit, because of the possibility that the other player will contribute.
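The threshold can be computed by fixed-point iteration: a player with cost exactly ${\displaystyle C^{*}}$ must be indifferent, so ${\displaystyle C^{*}=1-F(C^{*})}$, where ${\displaystyle F}$ is the CDF of the cost distribution. For costs uniform on ${\displaystyle [0,2]}$, ${\displaystyle F(c)=c/2}$, giving ${\displaystyle C^{*}=2/3}$. A sketch (the solver below is a hypothetical helper, not a standard routine):

```python
# Symmetric one-stage threshold: a threshold-cost player is indifferent,
# so C* solves C* = 1 - F(C*), where F is the cost CDF. For F(c) = c/2
# (uniform on [0, 2]) the map c -> 1 - c/2 is a contraction, so simple
# iteration converges to the unique fixed point C* = 2/3.

def solve_threshold(cdf, c0=1.0, iterations=200):
    """Iterate c <- 1 - cdf(c); converges when the map is a contraction."""
    c = c0
    for _ in range(iterations):
        c = 1 - cdf(c)
    return c

def uniform_cdf(c):
    """CDF of the uniform distribution on [0, 2]."""
    return min(max(c / 2, 0.0), 1.0)

c_star = solve_threshold(uniform_cdf)  # converges to 2/3
```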

Now, suppose that this game is repeated two times.[3]:section 8.2.3 The two plays are independent, i.e., each day the players decide simultaneously whether to build a public good in that day, get a payoff of 1 if the good is built in that day, and pay their cost if they built in that day. The only connection between the games is that, by playing in the first day, the players may reveal some information about their costs, and this information might affect the play in the second day.

We are looking for a symmetric PBE. Denote by ${\displaystyle {\hat {c}}}$ the threshold cost of both players in day 1 (so in day 1, each player builds if-and-only-if their cost is at most ${\displaystyle {\hat {c}}}$). To calculate ${\displaystyle {\hat {c}}}$, we work backwards and analyze the players' actions in day 2. Their actions depend on the history (= the two actions in day 1), and there are three options:

1. In day 1, no player built. So now both players know that their opponent's cost is above ${\displaystyle {\hat {c}}}$. They update their belief accordingly, and conclude that there is a smaller chance that their opponent will build in day 2. Therefore, they increase their threshold cost, and the threshold cost in day 2 is ${\displaystyle c^{00}>{\hat {c}}}$.
2. In day 1, both players built. So now both players know that their opponent's cost is below ${\displaystyle {\hat {c}}}$. They update their belief accordingly, and conclude that there is a larger chance that their opponent will build in day 2. Therefore, they decrease their threshold cost, and the threshold cost in day 2 is ${\displaystyle c^{11}<{\hat {c}}}$.
3. In day 1, exactly one player built; suppose it is player 1. So now, it is known that the cost of player 1 is below ${\displaystyle {\hat {c}}}$ and the cost of player 2 is above ${\displaystyle {\hat {c}}}$. There is an equilibrium in which the actions in day 2 are identical to the actions in day 1 - player 1 builds and player 2 does not build.

It is possible to calculate the expected payoff of the "threshold player" (a player with cost exactly ${\displaystyle {\hat {c}}}$) in each of these situations. Since the threshold player should be indifferent between contributing and not contributing, it is possible to calculate the day-1 threshold cost ${\displaystyle {\hat {c}}}$. It turns out that this threshold is lower than ${\displaystyle C^{*}}$, the threshold in the one-stage game. This means that, in the two-stage game, the players are less willing to build than in the one-stage game. Intuitively, the reason is that, by not contributing on the first day, a player makes the other player believe their cost is high, which makes the other player more willing to contribute on the second day.

### Jump-bidding

In an open-outcry English auction, bidders can raise the current price in small steps (e.g., by \$1 each time). However, there is often jump bidding: some bidders raise the current price by much more than the minimal increment. One explanation for this is that it serves as a signal to the other bidders. There is a PBE in which each bidder jumps if-and-only-if their value is above a certain threshold. See Jump bidding#signaling.