Setup

I was playing Risk over the holidays and it occurred to me that I can calculate my odds of victory when attacking a territory. Sad to say, it didn't help me win, but it provided me the excuse to do some fun calculations on my graph paper while waiting for my next turn.

A quick review of the rules

Battles in Risk are resolved as follows:

  1. The attacker chooses up to min(number of attacking armies, 3) dice to roll.
  2. The defender chooses up to min(number of defending armies, 2) dice to roll.
  3. The attacker and defender roll all of their dice simultaneously.
  4. The attacker and defender compare their highest rolls. Whichever is higher wins and the loser loses an army. Ties go to the defender.
  5. If both the attacker and defender rolled at least 2 dice, they compare the next-highest rolls with the same process as in step 4.
  6. If there are no attacking armies left, the defender wins. If there are no defending armies left, the attacker wins and claims the territory. Otherwise, return to step 1.

So if the attacker rolls a 5, 3, and 2 while the defender rolls a 5 and 2 then both the attacker and defender lose one army.
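
To make the round-resolution rule concrete, here's a quick Julia sketch of a single round. This is my own illustration, not code from the gist linked below, and resolve_round is a name I made up:

```julia
# Sketch of resolving one round of dice (illustrative only).
function resolve_round(n_attack_dice, n_defend_dice)
    attack = sort(rand(1:6, n_attack_dice), rev = true)
    defend = sort(rand(1:6, n_defend_dice), rev = true)
    attacker_losses, defender_losses = 0, 0
    for i in 1:min(n_attack_dice, n_defend_dice)  # compare the highest pairs
        if attack[i] > defend[i]
            defender_losses += 1
        else                                      # ties go to the defender
            attacker_losses += 1
        end
    end
    return attacker_losses, defender_losses
end

resolve_round(3, 2)  # the 5,3,2 vs 5,2 roll above would come out as (1, 1)
```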

Some Simplifications

I'm going to assume that everyone always rolls as many dice as possible. This is reasonable; rolling as many dice as possible seems to be the optimal strategy, but I haven't tried proving this.

I'm also assuming that the attacker cannot stop halfway through a battle. This is not reasonable; indeed the ability to make this decision is part of what keeps the game interesting. However, to incorporate this effect into our calculations requires us to model this decision-making process. I decided to sidestep this complexity.

Solution using Markov Chains

A Markov chain is the first approach that came to my mind as I've done similar calculations for games like Monopoly and Chutes and Ladders. Somebody else thought of it too; when doing a little research for this post I found this paper.

The idea is that a battle is a Markov chain with states \[ \{(A,D) : A \in \mathbb{Z}_{\geq 0}, D \in \mathbb{Z}_{\geq 0} \}\] where \(A\) and \(D\) are the numbers of armies the attacker and defender hold, respectively. While both the attacker and defender have armies, the chain moves between states until it eventually hits an absorbing state: \((A, 0)\), in which case the attacker wins, or \((0, D)\), in which case the defender wins the battle.

All we have to do is create the transition matrix, initialize our starting state, and then calculate the stationary distribution of the chain with the power method. If you're terribly interested in the details, check out the linked paper.
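
For concreteness, here's a rough Julia sketch of that procedure (my own illustration, not the paper's code, and attacker_win_prob_markov is a made-up name). It assumes a function t(l, a, d) for the single-round loss probabilities, which I define further down, and it simply pushes the initial state forward through the transition matrix until all the mass sits in the absorbing states:

```julia
# Sketch of the Markov-chain approach, assuming a helper t(l, a, d) giving the
# probability that the attacker loses l armies when rolling a dice against d dice.
function attacker_win_prob_markov(A, D, t)
    idx(a, d) = a * (D + 1) + d + 1          # flatten state (a, d) to an index
    n_states = (A + 1) * (D + 1)
    P = zeros(n_states, n_states)

    for a in 0:A, d in 0:D
        if a == 0 || d == 0
            P[idx(a, d), idx(a, d)] = 1.0    # absorbing states
        else
            n = min(a, d, 2)                 # armies lost each round
            for l in 0:n
                P[idx(a, d), idx(a - l, d - (n - l))] += t(l, min(a, 3), min(d, 2))
            end
        end
    end

    # Start in state (A, D); A + D steps are more than enough, since at least
    # one army is lost every round.
    dist = zeros(n_states); dist[idx(A, D)] = 1.0
    for _ in 1:(A + D)
        dist = P' * dist
    end

    # The attacker wins in any absorbing state (a, 0) with a > 0.
    return sum(dist[idx(a, 0)] for a in 1:A)
end
```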

I think this is the wrong approach. While a Markov chain is certainly a correct way to proceed, it's conceptually convoluted and cumbersome to compute, both on paper and in code. It also doesn't scale very well: the number of states is on the order of \(A \times D\), and the transition matrix grows as the square of the number of states.[1] You might be able to organize the transition matrix in such a way that it's easy to take powers analytically, but I'm skeptical.

Solution by Recursion

Here's a simpler approach.

First, let \(p(A, D)\) denote the probability that the attacker wins a battle starting with \(A\) attacking armies and \(D\) defending armies.

Let's just use the following recursion.

\[ p(A, D) = \sum_{l=0}^{n} t\big(l, \min(A, 3), \min(D, 2)\big) \times p(A - l, D - (n - l)) \]

where \(n = \min(A, D, 2)\) is the number of dice compared in a round (and hence the total number of armies lost that round), and \(t(l, a, d)\) is the probability that the attacker loses \(l\) armies when rolling \(a\) dice against a defender rolling \(d\) dice. Calculating these probabilities is a simple exercise, so I'll omit the calculations.[2]

We also have the base conditions. If the defender has no armies then it's a sure win for the attacker, thus

\[ p(A, 0) = 1 \]

Likewise if the attacker has no armies then it's a sure loss.

\[ p(0, D) = 0 \]

An astute reader will notice that this is exactly the same setup as the Markov chain; however, we don't need to rely on the power method here. Instead, we can just keep applying the recursion until we hit a base case (an absorbing state of the chain).
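
For example, a one-on-one battle terminates after a single round. The single-round probabilities 15/36 and 21/36 come from counting the 36 equally likely outcomes of the two dice (remembering that ties go to the defender):

\[ p(1, 1) = t(0, 1, 1)\, p(1, 0) + t(1, 1, 1)\, p(0, 1) = \frac{15}{36} \cdot 1 + \frac{21}{36} \cdot 0 = \frac{5}{12} \]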

Julia Implementation

Julia code implementing the recursion can be found in this gist.

The naive approach of writing a top-down recursive function will take a long time to execute even for modestly sized armies. The reason is that we're performing a large amount of duplicate work by following every potential path down the recursion. To avoid this duplication we should build up a table of results from the bottom up, starting with the base cases.
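
Here's a minimal bottom-up sketch of that idea. The linked gist is the full version; this one is my own illustration with a made-up name, and it again takes the single-round loss probabilities t(l, a, d) as an assumed helper:

```julia
# Bottom-up table of p(a, d), filled from the base cases upward.
function attacker_win_prob(A, D, t)
    p = zeros(A + 1, D + 1)               # p[a + 1, d + 1] holds p(a, d)
    p[2:end, 1] .= 1.0                    # base case: p(a, 0) = 1
                                          # base case: p(0, d) = 0 (already zero)
    for a in 1:A, d in 1:D
        n = min(a, d, 2)                  # armies lost each round
        p[a + 1, d + 1] = sum(t(l, min(a, 3), min(d, 2)) * p[a - l + 1, d - (n - l) + 1]
                              for l in 0:n)
    end
    return p[A + 1, D + 1]
end
```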

Exercises for the Reader

Expected Value

Instead of the probability that the attacker will win, it would be very helpful to know the expected number of armies the attacker will retain at the end of the battle.

How can the recursion be altered to allow for the computation of this expectation?

Distribution

We might instead be interested in the distribution of \(F_{A, D}\), the number of armies that the attacker will have at the end of the battle starting with \(A\) attacking armies and \(D\) defending armies. Notice that we can express the probability that the attacker wins as \(\Pr(F_{A, D} > 0)\) and the expected number of armies the attacker will retain as \(E[F_{A, D}]\).

How can the recursion be altered to allow for the computation of this distribution?

Footnotes:

[1]

If you are only calculating battles which could conceivably occur in a normal Risk game, you should be fine. If you want to know how a 1000 vs 1000 battle will turn out, you'll need another approach.

[2]

I've thought about it for a while and I'm not sure that there are much better approaches than brute-force enumeration. This works well for rolling up to 3 dice, but it quickly becomes computationally infeasible for more dice. This might be a good excuse for expanding my combinatorics knowledge.
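
For what it's worth, here's roughly what that brute-force enumeration might look like in Julia (an illustrative sketch of the enumeration described above, not the code from the gist):

```julia
# Brute-force t(l, a, d): enumerate all 6^(a+d) equally likely dice outcomes.
function t(l, a, d)
    hits = 0
    for rolls in Iterators.product(fill(1:6, a + d)...)
        attack = sort(collect(rolls[1:a]), rev = true)
        defend = sort(collect(rolls[a+1:end]), rev = true)
        # Ties go to the defender, so the attacker loses on <=.
        losses = sum(attack[i] <= defend[i] ? 1 : 0 for i in 1:min(a, d))
        losses == l && (hits += 1)
    end
    return hits / 6^(a + d)
end

t(1, 3, 2)  # probability the attacker loses exactly one army rolling 3 dice vs 2
```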