Polynomial Time Method for Solving Nash Equilibria of Zero-Sum Games
Abstract: There are a few studies that focus on solution methods for finding a Nash equilibrium of zero-sum games. We discuss the use of Karmarkar’s interior point method to solve the Nash equilibrium problems of a zero-sum game, and prove that it is theoretically a polynomial time algorithm. We implement the Karmarkar method, and a preliminary computational result shows that it performs well for zero-sum games. We also mention an affine scaling method that would help us compute Nash equilibria of general zero-sum games effectively.

1. Introduction

It is well known that John von Neumann proved the existence of Nash equilibria of zero-sum games in the late 1920s.

Dantzig and Thapa mentioned the linear programming formulation of a zero-sum game, and considered reducing a primal-dual pair of linear programs to a zero-sum game.

Khachiyan’s and Karmarkar’s interior-point methods for solving linear programming problems in polynomial time are significant discoveries.

Khachiyan’s ellipsoid method is important in that it was the first polynomial-time algorithm for linear programming; however, computational results with it have been disappointing.

Karmarkar’s projective scaling method is considered to be a practical polynomial-time algorithm, and numerous attempts were made to refine a variety of interior point methods in the late 1980s. In spite of those developments in linear programming, there are few studies concerning computational methods for finding a Nash equilibrium of a zero-sum game.

We discuss the use of Karmarkar’s method to find a Nash equilibrium of a zero-sum game, expressed as linear programming problems derived from the original game, and prove that it is indeed a polynomial time algorithm. We implement the Karmarkar method and apply it to Rock-Paper-Scissors; the result seems promising.

Finally, we also mention an affine scaling method that would help us compute a Nash equilibrium effectively.

2. Formulation

Dantzig and Thapa mentioned that the maximin strategy of Player 1 in a zero-sum game can be formulated as the following linear programming problem:

$\begin{array}{ll}\underset{x,\xi }{\text{maximize}} & \xi \\ \text{subject to} & {A}^{\text{T}}x\ge \xi e\\ & {\sum }_{i=1}^{n}{x}_{i}=1,\text{ }{x}_{i}\ge 0\end{array}$ (1)

where $\xi \in ℝ,e={\left(1,\cdots ,1\right)}^{\text{T}}\in {ℝ}_{+}^{n}$, $A\in {ℝ}^{n×n}$ is a payoff matrix, and $x\in {ℝ}_{+}^{n}$ is a mixed strategy vector of Player 1.

Likewise, the minimax strategy of Player 2 can be formulated as the following linear programming problem:

$\begin{array}{ll}\underset{y,\eta }{\text{minimize}} & \eta \\ \text{subject to} & Ay\le \eta e\\ & {\sum }_{i=1}^{n}{y}_{i}=1,\text{ }{y}_{i}\ge 0\end{array}$ (2)

where $y\in {ℝ}_{+}^{n}$ is a mixed strategy of Player 2, which is, in fact, the linear programming dual problem of (1).
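Both (1) and (2) can be handed to any off-the-shelf LP solver. The following sketch is an illustration only (the paper's own experiments use the Karmarkar implementation of Section 4); it assumes `scipy.optimize.linprog` is available and computes the maximin strategy of Player 1 for Rock-Paper-Scissors.

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Solve LP (1): maximize xi subject to A^T x >= xi*e,
    sum_i x_i = 1, x >= 0. Variables: z = (x_1, ..., x_n, xi)."""
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                      # linprog minimizes, so minimize -xi
    # A^T x >= xi*e  rewritten as  -A^T x + xi*e <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]               # x >= 0, xi free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]       # mixed strategy x, game value xi

# Rock-Paper-Scissors payoff matrix for Player 1 (as in Table 1).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])
x_bar, value = maximin_strategy(A)
```

Solving the dual (2) the same way, with the roles of the constraints reversed, yields Player 2's minimax strategy.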

The following result clarifies the relationship between (1) and (2).

Theorem 1. There exist solutions $\left\{\stackrel{¯}{x},\stackrel{¯}{\xi }\right\}$ and $\left\{\stackrel{¯}{y},\stackrel{¯}{\eta }\right\}$ for (1) and (2). Moreover,

$\stackrel{¯}{\xi }=\stackrel{¯}{\eta }$.

Proof We let $\xi ={\xi }_{1}-{\xi }_{2}$, ${\xi }_{1}\ge 0$, ${\xi }_{2}\ge 0$, and rewrite (1) as

$\begin{array}{ll}\underset{x,{\xi }_{1},{\xi }_{2}}{\text{minimize}} & -\left({\xi }_{1}-{\xi }_{2}\right)\\ \text{subject to} & {A}^{\text{T}}x-\left({\xi }_{1}-{\xi }_{2}\right)e\ge 0\\ & {e}^{\text{T}}x\ge 1\\ & -{e}^{\text{T}}x\ge -1\\ & {x}_{i}\ge 0,\text{ }i=1,\cdots ,n,\text{ }{\xi }_{1},{\xi }_{2}\ge 0\end{array}$ (3)

The dual linear program of (3) is

$\begin{array}{ll}\underset{y,{\eta }_{1},{\eta }_{2}}{\text{maximize}} & {\eta }_{1}-{\eta }_{2}\\ \text{subject to} & Ay+\left({\eta }_{1}-{\eta }_{2}\right)e\le 0\\ & -{e}^{\text{T}}y\le -1\\ & {e}^{\text{T}}y\le 1\\ & {y}_{i}\ge 0,\text{ }i=1,\cdots ,n,\text{ }{\eta }_{1},{\eta }_{2}\ge 0\end{array}$ (4)

Note that (4) is equivalent to (2) by letting $\eta =-\left({\eta }_{1}-{\eta }_{2}\right)$.

There exists a feasible point for (3), such as ${x}^{\prime }=\frac{1}{n}e$ with ${{\xi }^{\prime }}_{1}-{{\xi }^{\prime }}_{2}=\frac{1}{n}{\mathrm{min}}_{j}{\sum }_{i}{a}_{ij}$ (split into nonnegative ${{\xi }^{\prime }}_{1},{{\xi }^{\prime }}_{2}$ according to the sign). We also observe that (3) has a finite solution, as $\left({\xi }_{1}-{\xi }_{2}\right)$ is bounded because $x\in {\text{Δ}}^{n-1}$, where ${\text{Δ}}^{n-1}$ is the $\left(n-1\right)$-simplex. Therefore, $-\stackrel{¯}{\xi }=-\left({\stackrel{¯}{\xi }}_{1}-{\stackrel{¯}{\xi }}_{2}\right)={\stackrel{¯}{\eta }}_{1}-{\stackrel{¯}{\eta }}_{2}=-\stackrel{¯}{\eta }$ because the strong duality theorem holds. □

The above result shows that (2) is the linear programming dual problem of (1).

Next, we establish the validity of the above formulation. Let ${S}_{x}=\left\{{s}_{1},\cdots ,{s}_{n}\right\}$ and ${S}_{y}=\left\{{s}_{1},\cdots ,{s}_{n}\right\}$ be sets of pure strategies of Players 1 and 2, respectively. Let $E\left(x,y\right)$ denote the expected payoff value for Player 1 when Player 1 takes x and Player 2 takes y as a mixed strategy, that is,

$E\left(x,y\right)={\sum }_{i=1}^{n}{x}_{i}E\left({s}_{i},y\right)={\sum }_{i=1}^{n}{x}_{i}\left({\sum }_{j=1}^{n}{y}_{j}E\left({s}_{i},{s}_{j}\right)\right)={\sum }_{i=1}^{n}{x}_{i}\left({\sum }_{j=1}^{n}{y}_{j}{a}_{ij}\right)={x}^{\text{T}}Ay$ (5)
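As a quick numerical check of (5), the sketch below compares the double-sum form with the matrix form ${x}^{\text{T}}Ay$ for the Rock-Paper-Scissors payoff matrix; the concrete mixed strategies are arbitrary illustrative choices, not data from the paper.

```python
import numpy as np

# Payoff matrix for Player 1 in Rock-Paper-Scissors (rows: P1's pure
# strategies, columns: P2's pure strategies).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def expected_payoff(x, A, y):
    """E(x, y) = sum_i sum_j x_i y_j a_ij = x^T A y, Equation (5)."""
    return x @ A @ y

x = np.array([0.5, 0.25, 0.25])   # a mixed strategy for Player 1
y = np.array([0.2, 0.3, 0.5])     # a mixed strategy for Player 2

# The double-sum form and the matrix form of (5) agree.
double_sum = sum(x[i] * sum(y[j] * A[i, j] for j in range(3)) for i in range(3))
assert abs(expected_payoff(x, A, y) - double_sum) < 1e-12
```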

Theorem 2.

$E\left(x,\stackrel{¯}{y}\right)\le E\left(\stackrel{¯}{x},\stackrel{¯}{y}\right)\le E\left(\stackrel{¯}{x},y\right)$. (6)

Proof Because $\stackrel{¯}{x}$ and $\stackrel{¯}{y}$ are the solutions for (1) and (2), respectively,

$A\stackrel{¯}{y}\le \alpha e\le {A}^{\text{T}}\stackrel{¯}{x}$,

where $\alpha =\stackrel{¯}{\xi }=\stackrel{¯}{\eta }$ from Theorem 1.

Therefore, for any mixed strategies $x$ and $y$,

${x}^{\text{T}}A\stackrel{¯}{y}\le \alpha$ and $\alpha \le {y}^{\text{T}}{A}^{\text{T}}\stackrel{¯}{x}={\stackrel{¯}{x}}^{\text{T}}Ay$

namely,

$E\left(x,\stackrel{¯}{y}\right)\le E\left(\stackrel{¯}{x},\stackrel{¯}{y}\right)\le E\left(\stackrel{¯}{x},y\right)$. □

The result of Theorem 2 is known as the minimax theorem; however, the direct derivation above is simple and easy to comprehend.

3. Karmarkar Method

We can use the simplex method to find a Nash equilibrium from the results of Section 2.

However, the running time required to solve a linear programming problem using the simplex method may increase rapidly (exponentially in the worst case) as the number of variables increases.

Therefore, we employ a variation of Karmarkar’s method, an interior point method that assures polynomial-time convergence.

The Karmarkar method  we adopt deals with the following canonical form:

$\begin{array}{ll}\text{minimize} & z={c}^{\text{T}}x\\ \text{subject to} & Ax=0\\ & {a}^{\text{T}}x=1\\ & x\ge 0\end{array}$ (7)

where $c,x\in {ℝ}^{n}$, $a={\left(\overbrace{1,\cdots ,1}^{m},0,\cdots ,0\right)}^{\text{T}}\in {ℝ}_{+}^{n}$ (the first m components equal 1), and $A\in {ℝ}^{n×n}$.

If ${c}^{\text{T}}\stackrel{¯}{x}=0$, where $\stackrel{¯}{x}$ is the solution of (7), we refer to (7) as the canonical form.

It should be noted that the original Karmarkar’s method utilizes $e={\left(1,\cdots ,1\right)}^{\text{T}}\in {ℝ}_{+}^{n}$ instead of a. In this study, we utilize a because the mixed-strategy components of x can then be constrained by ${a}^{\text{T}}x=1$, as in (14) and (15) later.

The key projective transformation sends x to

$\stackrel{˜}{x}=\frac{{X}^{-1}x}{{e}^{\text{T}}{X}^{-1}x}$

where $X=\text{diag}\left({x}^{k}\right)$ is the diagonal matrix of the current iterate ${x}^{k}$. We notice that $\stackrel{˜}{x}\in {\text{Δ}}^{n-1}$, where ${\text{Δ}}^{n-1}$ is the $\left(n-1\right)$-simplex. The corresponding inverse transformation is

$x=\frac{X\stackrel{˜}{x}}{{a}^{\text{T}}X\stackrel{˜}{x}}$
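The pair of transformations can be verified numerically. The sketch below uses assumed illustrative data (n = 4, with m = 2 leading ones in a); it checks that the current iterate maps to the simplex center $e/n$ and that the inverse transformation exactly recovers any point satisfying ${a}^{\text{T}}x=1$.

```python
import numpy as np

# Illustrative (assumed) data: current interior iterate xk and another
# point x, both satisfying a^T x = 1, where a has m = 2 leading ones.
a = np.array([1., 1., 0., 0.])
xk = np.array([0.6, 0.4, 0.3, 0.7])        # current iterate, a^T xk = 1
x = np.array([0.2, 0.8, 0.5, 0.1])         # any point with a^T x = 1
X = np.diag(xk)
Xinv = np.diag(1.0 / xk)

# Forward projective transformation: x -> x~ on the simplex.
x_t = Xinv @ x / (np.ones(4) @ Xinv @ x)
assert abs(x_t.sum() - 1.0) < 1e-12        # x~ lies on the (n-1)-simplex

# The current iterate itself is mapped to the simplex center e/n.
center = Xinv @ xk / (np.ones(4) @ Xinv @ xk)
assert np.allclose(center, 0.25)

# The inverse transformation recovers x whenever a^T x = 1.
x_back = X @ x_t / (a @ X @ x_t)
assert np.allclose(x_back, x)
```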

As seen in , the problem in (7) is transformed as

$\begin{array}{ll}\text{minimize} & {\stackrel{˜}{c}}^{\text{T}}\stackrel{˜}{x}\\ \text{subject to} & \stackrel{˜}{A}\stackrel{˜}{x}=0\\ & {e}^{\text{T}}\stackrel{˜}{x}=1\\ & \stackrel{˜}{x}\ge 0\end{array}$ (8)

where $\stackrel{˜}{c}=Xc$ and $\stackrel{˜}{A}=AX$. We denote the constraint matrix using

$B=\left(\begin{array}{c}\stackrel{˜}{A}\\ {e}^{\text{T}}\end{array}\right)$,

and the corresponding orthogonal projection matrix is ${P}_{B}=I-{B}^{\text{T}}{\left(B{B}^{\text{T}}\right)}^{-1}B$.

The projected steepest-descent direction is

$\text{Δ}\stackrel{˜}{x}=-{P}_{B}\stackrel{˜}{c}=-{P}_{B}Xc=-\left(\stackrel{˜}{P}-\frac{1}{n}e{e}^{\text{T}}\right)Xc=-\stackrel{˜}{P}Xc+\frac{{c}^{\text{T}}{x}^{k}}{n}e$ (9)

where $\stackrel{˜}{P}=I-{\stackrel{˜}{A}}^{\text{T}}{\left(\stackrel{˜}{A}{\stackrel{˜}{A}}^{\text{T}}\right)}^{-1}\stackrel{˜}{A}$; the last equality uses $\stackrel{˜}{A}e=A{x}^{k}=0$ and ${e}^{\text{T}}Xc={c}^{\text{T}}{x}^{k}$.

At each step, the next point ${\stackrel{˜}{x}}^{k+1}$ is given by

${\stackrel{˜}{x}}^{k+1}=\frac{e}{n}+\alpha \text{Δ}\stackrel{˜}{x}$, $\alpha >0$,

more precisely,

${\stackrel{˜}{x}}^{k+1}=\frac{e}{n}+\frac{1}{\sqrt{n\left(n+1\right)}‖\text{Δ}\stackrel{˜}{x}‖}\cdot \text{Δ}\stackrel{˜}{x}$, (10)

so that ${\stackrel{˜}{x}}^{k+1}$ remains an interior point of ${\text{Δ}}^{n-1}$.

The next point ${x}^{k+1}$ in x-space is obtained by

${x}^{k+1}=\frac{X{\stackrel{˜}{x}}^{k+1}}{{a}^{\text{T}}X{\stackrel{˜}{x}}^{k+1}}$. (11)

Finally, we define the optimality criterion as

${c}^{\text{T}}{x}^{k}<{2}^{-q}{c}^{\text{T}}{x}^{0}$, (12)

where ${2}^{-q}$ is a prescribed precision.

Next, we describe the Karmarkar method as follows:

Algorithm 1.

Input: $A\in {ℤ}^{n×n}$, $c\in {ℤ}^{n}$, a feasible point ${x}^{0}$ for (7)

Output: ${x}^{*}$ such that ${c}^{\text{T}}{x}^{*}=0$, $A{x}^{*}=0$, ${a}^{\text{T}}{x}^{*}=1$, ${x}^{*}\ge 0$.

1: Compute $\text{Δ}\stackrel{˜}{x}$ by (9).

2: Determine ${\stackrel{˜}{x}}^{k+1}$ by (10).

3: Obtain ${x}^{k+1}$ using (11). If the optimality criterion (12) is satisfied, stop with ${x}^{*}={x}^{k+1}$.

4: $k←k+1$. Go to Step 1.
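Algorithm 1 can be sketched compactly in NumPy. This is our own dense illustrative implementation, not the authors' code; it assumes, as the canonical form requires, that the optimal objective value of (7) is 0, and it borrows the Rock-Paper-Scissors problem (16) of Section 4 together with the starting point of Table 2 as test data.

```python
import numpy as np

def karmarkar(A, c, a, x0, q=20, max_iter=500):
    """Sketch of Algorithm 1 for the canonical form (7):
    minimize c^T x  s.t.  Ax = 0, a^T x = 1, x >= 0,
    assuming the optimal objective value is 0."""
    n = len(c)
    e = np.ones(n)
    x = np.asarray(x0, dtype=float)
    target = 2.0 ** (-q) * (c @ x)               # threshold of criterion (12)
    for _ in range(max_iter):
        X = np.diag(x)
        B = np.vstack([A @ X, e])                # constraint matrix B of (8)
        # Orthogonal projection P_B = I - B^T (B B^T)^{-1} B onto null(B).
        PB = np.eye(n) - B.T @ np.linalg.solve(B @ B.T, B)
        d = -PB @ (X @ c)                        # projected direction (9)
        nd = np.linalg.norm(d)
        if nd < 1e-14:
            break
        x_t = e / n + d / (np.sqrt(n * (n + 1)) * nd)   # step (10)
        x = X @ x_t / (a @ X @ x_t)              # back to x-space, (11)
        if c @ x < target:                       # optimality criterion (12)
            break
    return x

# Test data: problem (16) of Section 4, variables
# z = (x1, x2, x3, xi1, xi2, s1, s2, s3), starting point from Table 2.
A16 = np.array([[0., -1., 1., 1., -1., 1., 0., 0.],
                [1., 0., -1., 1., -1., 0., 1., 0.],
                [-1., 1., 0., 1., -1., 0., 0., 1.]])
c16 = np.array([0., 0., 0., -1., 1., 0., 0., 0.])   # objective -(xi1 - xi2)
a16 = np.array([1., 1., 1., 0., 0., 0., 0., 0.])
z0 = np.array([0.8, 0.1, 0.1, 0.1, 0.9, 0.8, 0.1, 1.5])
z = karmarkar(A16, c16, a16, z0)
```

If the halving behavior reported in Table 2 carries over, the objective ${c}^{\text{T}}z$ shrinks geometrically and the strategy components of z approach $\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)$.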

Karmarkar defined a potential function as follows:

$f\left(x\right)=n\mathrm{ln}{c}^{\text{T}}x-{\sum }_{j=1}^{n}\mathrm{ln}{x}_{j}={\sum }_{j=1}^{n}\mathrm{ln}\left(\frac{{c}^{\text{T}}x}{{x}_{j}}\right)$. (13)

He proved that Algorithm 1 generates a sequence $\left\{{x}^{k}\right\}$ that reduces $f\left({x}^{k}\right)$ by at least a constant $\gamma >0$ at each iteration.

In fact, the following result holds for the original Karmarkar algorithm.

Theorem 3. Let $\left\{{x}^{k}\right\}$ be the sequence generated by Karmarkar’s algorithm  applied to (3). Then, (3) can be solved in polynomial time.

Proof We can transform (3) into Karmarkar’s original canonical form in polynomial time. Then the result follows, since the original canonical form can be solved in at most $O\left(n\left(q+{\mathrm{log}}_{2}n\right)\right)$ iterations by Theorem 1 of Karmarkar’s original paper. □

We can establish the following result.

Theorem 4. A Nash equilibrium of zero-sum games can be found in polynomial time.

Proof Nash equilibria of zero-sum games can be formulated as (1) and (2), even if the value of the game is not necessarily 0. We can rewrite (1) and (2) as (3) and (4), respectively, and (3) (and likewise (4)) can be solved in polynomial time by Theorem 3. □

We should observe that the transformation in the above proof is used only for the proof itself. When $\stackrel{¯}{\xi }\ne 0$, we can employ the dual problem scheme [9, Chapter 17.5] to update an estimated lower bound on the optimal value ${z}^{*}$, which is used to transform (3) into the canonical form (7).

4. Computational Results

In this section, we employ the Karmarkar method described in the previous section.

The zero-sum game (1) and (2) can be rewritten as:

$\begin{array}{ll}\underset{x,{\xi }_{1},{\xi }_{2},s}{\text{minimize}} & -\left({\xi }_{1}-{\xi }_{2}\right)\\ \text{subject to} & -{A}^{\text{T}}x+\left({\xi }_{1}-{\xi }_{2}\right)e+{\left({s}_{1},\cdots ,{s}_{n}\right)}^{\text{T}}=0\\ & {e}^{\text{T}}x=1\\ & {x}_{i}\ge 0,\text{ }i=1,\cdots ,n,\text{ }{\xi }_{1},{\xi }_{2}\ge 0,\text{ }{s}_{i}\ge 0,\text{ }i=1,\cdots ,n\end{array}$ (14)

and

$\begin{array}{ll}\underset{y,{\eta }_{1},{\eta }_{2},t}{\text{maximize}} & {\eta }_{1}-{\eta }_{2}\\ \text{subject to} & Ay+\left({\eta }_{1}-{\eta }_{2}\right)e+{\left({t}_{1},\cdots ,{t}_{n}\right)}^{\text{T}}=0\\ & {e}^{\text{T}}y=1\\ & {y}_{i}\ge 0,\text{ }i=1,\cdots ,n,\text{ }{\eta }_{1},{\eta }_{2}\ge 0,\text{ }{t}_{i}\ge 0,\text{ }i=1,\cdots ,n\end{array}$ (15)

where $x\in {ℝ}^{n}$, $A\in {ℝ}^{n×n}$, $e={\left(1,\cdots ,1\right)}^{\text{T}}\in {ℝ}^{n}$, $y\in {ℝ}^{n}$.

These problems belong to the canonical form (7), which can be solved by Algorithm 1.

Rock-Paper-Scissors is used as a test problem of zero-sum games. Its payoff matrix for Player 1 (gain) and Player 2 (loss) is shown in Table 1.

The Nash equilibrium of Rock-Paper-Scissors is

$\left\{\stackrel{¯}{x},\stackrel{¯}{y}\right\}=\left\{\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right),\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\right\}$

and the value of the game is 0.

It should be noted that (14) and (15) are exactly the same problem because ${A}^{\text{T}}=-A$ in this case. Thus, we present only the results for (14); the problem is:

$\begin{array}{ll}\underset{x,{\xi }_{1},{\xi }_{2},s}{\text{minimize}} & -\left({\xi }_{1}-{\xi }_{2}\right)\\ \text{subject to} & -{x}_{2}+{x}_{3}+{\xi }_{1}-{\xi }_{2}+{s}_{1}=0\\ & {x}_{1}-{x}_{3}+{\xi }_{1}-{\xi }_{2}+{s}_{2}=0\\ & -{x}_{1}+{x}_{2}+{\xi }_{1}-{\xi }_{2}+{s}_{3}=0\\ & {x}_{1}+{x}_{2}+{x}_{3}=1\\ & {x}_{1},{x}_{2},{x}_{3}\ge 0,\text{ }{\xi }_{1},{\xi }_{2}\ge 0,\text{ }{s}_{1},{s}_{2},{s}_{3}\ge 0\end{array}$ (16)
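As a sanity check (an illustration of ours, not a computation from the paper), one can verify directly that the uniform strategy with ${\xi }_{1}={\xi }_{2}=0$ and zero slacks satisfies every constraint of (16) with objective value 0.

```python
import numpy as np

# Constraint data of (16); variables z = (x1, x2, x3, xi1, xi2, s1, s2, s3).
A16 = np.array([[0., -1., 1., 1., -1., 1., 0., 0.],
                [1., 0., -1., 1., -1., 0., 1., 0.],
                [-1., 1., 0., 1., -1., 0., 0., 1.]])

# Candidate optimal point: uniform mixed strategy, xi1 = xi2 = 0 (game
# value 0), zero slacks.
z_bar = np.array([1/3, 1/3, 1/3, 0., 0., 0., 0., 0.])

assert np.allclose(A16 @ z_bar, 0.0)       # equality constraints hold
assert abs(z_bar[:3].sum() - 1.0) < 1e-12  # x lies on the simplex
objective = -(z_bar[3] - z_bar[4])         # -(xi1 - xi2) = game value 0
```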

The Karmarkar method was coded in Python, and run on a Windows PC (Core i7 8550U, RAM 16GB).

Table 2 summarizes the sequence generated by the Karmarkar method applied to (16) from ${x}^{0}=\left(0.8,0.1,0.1\right)$, ${\xi }_{1}^{0}=0.1$, ${\xi }_{2}^{0}=0.9$, ${s}^{0}=\left(0.8,0.1,1.5\right)$. We can see that the objective value decreases by half at each iteration, which, together with criterion (12), is consistent with Theorem 3.

Figure 1 shows the sequences generated by the Karmarkar method from three different starting points, ${x}^{0}=\left(0.8,0.1,0.1\right),\left(0.1,0.8,0.1\right)$ and $\left(0.1,0.1,0.8\right)$.

Figure 1. Sequences for different starting points.

Table 1. Payoff matrix of Rock-Paper-Scissors for Player 1 (P1) and Player 2 (P2).

Table 2. Details for ${x}^{0}=\left(0.8,0.1,0.1\right)$.

Each sequence appears to move almost linearly toward the Nash equilibrium from an arbitrary interior starting point.

Table 2 and Figure 1 show that the Karmarkar method is effective and efficient even for small-size problems.

5. Concluding Remarks

Linear programming formulations can represent the minimax principle for zero-sum games.

We have proved that Karmarkar’s algorithm solves these linear programming problems in polynomial time.

The Karmarkar method performs well when the value of the game is 0. If the value of the game is not 0, we can utilize the dual problem scheme [9, Chapter 17.5].

Alternatively, we can resort to an affine scaling algorithm, because either (1) or (2), after introducing slack variables, is in linear programming standard form regardless of whether or not the value of the game is 0. The affine scaling algorithm for computing equilibria of zero-sum games would be practically effective; however, it has no known proof of polynomial-time convergence.
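For concreteness, the affine scaling direction (Dikin's method) for a standard-form problem $\mathrm{min}\text{ }{c}^{\text{T}}x$, $Ax=b$, $x\ge 0$ can be sketched as follows. This is our own illustrative implementation, not code from the paper; the step fraction $\gamma =0.5$ is a conservative assumption (global convergence of the long-step variant for degenerate problems is known for fractions up to 2/3). As test data it uses problem (14) for Rock-Paper-Scissors, written as $Az=b$.

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.5, tol=1e-10, max_iter=2000):
    """Long-step affine scaling sketch for min c^T x s.t. Ax = b, x >= 0,
    started from a strictly feasible interior point x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        X2 = np.diag(x ** 2)
        # Dual estimate w and reduced costs r = c - A^T w.
        w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
        r = c - A.T @ w
        d = -X2 @ r                      # affine-scaling direction
        if np.linalg.norm(d) < tol:
            break                        # (near-)optimal
        if np.all(d >= 0):
            break                        # objective unbounded below
        # Largest feasible step, damped by gamma to stay interior.
        alpha = gamma * np.min(x[d < 0] / -d[d < 0])
        x = x + alpha * d
    return x

# Problem (14) for Rock-Paper-Scissors in standard form Az = b:
# variables z = (x1, x2, x3, xi1, xi2, s1, s2, s3).
A14 = np.array([[0., -1., 1., 1., -1., 1., 0., 0.],
                [1., 0., -1., 1., -1., 0., 1., 0.],
                [-1., 1., 0., 1., -1., 0., 0., 1.],
                [1., 1., 1., 0., 0., 0., 0., 0.]])
b14 = np.array([0., 0., 0., 1.])
c14 = np.array([0., 0., 0., -1., 1., 0., 0., 0.])
z0 = np.array([0.8, 0.1, 0.1, 0.1, 0.9, 0.8, 0.1, 1.5])
z = affine_scaling(A14, b14, c14, z0)
```

Unlike the Karmarkar method, no projective transformation or canonical form is needed here, which is why this scheme applies directly whether or not the value of the game is 0.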

Cite this paper: Tanaka, Y. and Togashi, M. (2021) Polynomial Time Method for Solving Nash Equilibria of Zero-Sum Games. American Journal of Computational Mathematics, 11, 23-30. doi: 10.4236/ajcm.2021.111002.
References

   von Neumann, J. (1928) Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100, 295-320.
https://doi.org/10.1007/BF01448847

   Dantzig, G.B. and Thapa, M.N. (1997) Linear Programming 1: Introduction. Springer-Verlag, New York, 166.

   Khachiyan, L.G. (1979) A Polynomial Algorithm in Linear Programming. Doklady Akademii Nauk, 244, 1093-1096.

   Karmarkar, N. (1984) A New Polynomial Time Algorithm for Linear Programming. Combinatorica, 4, 373-395.
https://doi.org/10.1007/BF02579150

   Renegar, J. (1988) A Polynomial-Time Algorithm, Based on Newton’s Method, for Linear Programming. Mathematical Programming, 40, 59-93.
https://doi.org/10.1007/BF01580724

   Vanderbei, R.J., Meketon, M.S. and Freedman, B.A. (1986) A Modification of Karmarkar’s Linear Programming Algorithm. Algorithmica, 1, 395-407.
https://doi.org/10.1007/BF01840454

   Daskalakis, C., Deckelbaum, A. and Kim, A. (2015) Near-Optimal No-Regret Algorithms for Zero-Sum Games. Games and Economic Behavior, 100, 327-348.
https://doi.org/10.1016/j.geb.2014.01.003

   Nocedal, J. and Wright, S.J. (2006) Numerical Optimization. 2nd Edition, Springer, Berlin.

   Nash, S.G. and Sofer, A. (1996) Linear and Nonlinear Programming. McGraw-Hill, New York.

   Dikin, I.I. (1967) Iterative Solution of the Problems of Linear and Quadratic Programming. Soviet Mathematics Doklady, 8, 674-675.

   Tsuchiya, T. and Muramatsu, M. (1995) Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems. SIAM Journal on Optimization, 5, 525-551.
https://doi.org/10.1137/0805027
