# Kelly criterion

In probability theory and intertemporal portfolio choice, the Kelly criterion (also Kelly strategy, Kelly formula, or Kelly bet) is a formula for bet sizing that leads almost surely to higher wealth than any other strategy in the long run (i.e. in the limit as the number of bets goes to infinity). The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. The criterion prescribes betting a fixed, predetermined fraction of current wealth, and the resulting bet sizes can be counterintuitive. It was described by J. L. Kelly Jr., a researcher at Bell Labs, in 1956. The practical use of the formula has since been demonstrated.

For an even money bet, the Kelly criterion computes the optimal wager by doubling the probability of winning and subtracting one. So, for a bet with a 70% chance to win (probability 0.7), doubling 0.7 gives 1.4; subtracting 1 leaves 0.4, so the optimal wager is 40% of available funds.

In recent years, Kelly-style analysis has become a part of mainstream investment theory and the claim has been made that well-known successful investors including Warren Buffett and Bill Gross use Kelly methods. William Poundstone wrote an extensive popular account of the history of Kelly betting.

The Kelly formalism is beneficial only in a restricted comparison with alternative bet-sizing formulas: successful betting formulas are impossible, and ruin is inevitable when betting persistently. A Kelly system may merely take longer to approach ruin, or decline exponentially toward trivially small bets, than alternative systems.

## Example

In one study, each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. The behavior of the test subjects was far from optimal:

Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment. Using the Kelly criterion and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of the pot on each toss of the coin (see first example below). If losing, the size of the bet gets cut; if winning, the stake increases. If the bettors had followed this rule (assuming that bets have infinite granularity and there are up to 300 coin tosses per game and that a player who reaches the cap would stop betting after that), an average of 94% of them would have reached the cap, and the average payout would be $237.36. (In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have even better results.)

## Statement

For simple bets with two outcomes, one involving losing the entire amount bet, and the other involving winning the bet amount multiplied by the payoff odds, the Kelly bet is:

$f^{*}={\frac {bp-q}{b}}={\frac {bp-(1-p)}{b}}={\frac {p(b+1)-1}{b}}$

where:

• $f^{*}$ is the fraction of the current bankroll to wager, i.e. how much to bet;
• $b$ is the net odds received on the wager ("$b$ to 1"), that is, you could win $b (on top of getting back the wagered $1) for a $1 bet;
• $p$  is the probability of winning;
• $q$  is the probability of losing, which is $1-p$ .

As an example, if a gamble has a 60% chance of winning ($p=0.60$ , $q=0.40$ ), and the gambler receives 1-to-1 odds on a winning bet ($b=1$ ), then the gambler should bet 20% of the bankroll at each opportunity ($f^{*}=0.20$ ), in order to maximize the long-run growth rate of the bankroll.
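The two-outcome formula is easy to check numerically; a minimal Python sketch (the function name is ours):

```python
def kelly_fraction(p, b):
    """Kelly fraction f* = (b*p - q)/b for a bet won with probability p
    at net odds b-to-1, losing the whole stake otherwise."""
    q = 1 - p  # probability of losing
    return (b * p - q) / b

print(kelly_fraction(0.60, 1))     # approximately 0.20
print(kelly_fraction(18 / 38, 1))  # approximately -1/19: the roulette case below
```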

If the gambler has zero edge, i.e. if $b=q/p$ , then the criterion recommends for the gambler to bet nothing.

If the edge is negative ($b<q/p$) the formula gives a negative result, indicating that the gambler should take the other side of the bet. For example, in American roulette, the bettor is offered an even money payoff ($b=1$ ) on red, when there are 18 red numbers and 20 non-red numbers on the wheel ($p=18/38$ ). The Kelly bet is $-1/19$ , meaning the gambler should bet one-nineteenth of their bankroll that red will not come up. There is no explicit anti-red bet offered with comparable odds in roulette, so the best a Kelly gambler can do is bet nothing.

The top of the first fraction is the expected net winnings from a $1 bet, since the two outcomes are that you either win $b$ with probability $p$ , or lose the $1 wagered (i.e. win −$1) with probability $q$ . Hence:

$f^{*}={\frac {\text{expected net winnings}}{\text{net winnings if you win}}}$

For even-money bets (i.e. when $b=1$ ), the first formula can be simplified to:

$f^{*}=p-q.$

Since $q=1-p$ , this simplifies further to

$f^{*}=2p-1.$

A more general problem relevant for investment decisions is the following:

1. The probability of success is $p$ .
2. If you succeed, the value of your investment increases from $1$  to $1+b$ .
3. If you fail (for which the probability is $q=1-p$ ) the value of your investment decreases from $1$  to $1-a$ . (Note that the previous description above assumes that $a$  is 1.)

In this case, as is proved in the next section, the Kelly criterion turns out to be the relatively simple expression

$f^{*}=p/a-q/b.$

Note that this reduces to the original expression for the special case above ($f^{*}=p-q$ ) for $b=a=1$ .

Clearly, in order to decide in favor of investing at least a small amount $(f^{*}>0)$ , you must have

$pb>qa,$

which obviously is nothing more than the fact that the expected profit must exceed the expected loss for the investment to make any sense.

The general result clarifies why leveraging (taking out a loan that requires paying interest in order to raise investment capital) decreases the optimal fraction to be invested, as in that case $a>1$ . Obviously, no matter how large the probability of success, $p$ , is, if $a$  is sufficiently large, the optimal fraction to invest is zero. Thus, using too much margin is not a good investment strategy when the cost of capital is high, even when the opportunity appears promising.
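A quick numeric illustration of this effect (the parameter choices are ours), holding $p=0.6$ and $b=1$ while increasing the loss fraction $a$:

```python
# Effect of the loss fraction a on f* = p/a - q/b, with p = 0.6, b = 1:
# as a grows (e.g. through leverage costs), the optimal fraction shrinks
# and eventually reaches zero.
p, q, b = 0.6, 0.4, 1.0
for a in (1.0, 1.2, 1.5, 2.0):
    print(a, p / a - q / b)   # approximately 0.2, 0.1, 0.0, -0.1
```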

## Proof

Heuristic proofs of the Kelly criterion are straightforward. The Kelly criterion maximizes the expected value of the logarithm of wealth (the expectation value of a function is given by the sum, over all possible outcomes, of the probability of each particular outcome multiplied by the value of the function in the event of that outcome). We start with 1 unit of wealth and bet a fraction $f$  of that wealth on an outcome that occurs with probability $p$  and offers odds of $b$ . The probability of winning is $p$ , and in that case the resulting wealth is equal to $1+fb$ . The probability of losing is $1-p$ , and in that case the resulting wealth is equal to $1-f$ . Therefore, the expected value for log wealth $(E)$  is given by:

$E=p\log(1+fb)+(1-p)\log(1-f)$

To find the value of $f$  for which the expectation value is maximized, denoted as $f^{*}$ , we differentiate the above expression and set this equal to zero. This gives:

${\frac {dE}{df^{*}}}={\frac {pb}{1+f^{*}b}}-{\frac {1-p}{1-f^{*}}}=0$

Rearranging this equation to solve for the value of $f^{*}$  gives the Kelly criterion:

$f^{*}={\frac {pb+p-1}{b}}$

For a rigorous and general proof, see Kelly's original paper or some of the other references listed below. Some corrections have been published.

We give the following non-rigorous argument for the case with $b=1$  (a 50:50 "even money" bet) to show the general idea and provide some insights.

When $b=1$ , a Kelly bettor bets $2p-1$  times their initial wealth $W$ , as shown above. If they win, they have $2pW$  after one bet. If they lose, they have $2(1-p)W$ . Suppose they make $N$  bets like this, and win $K$  times out of this series of $N$  bets. The resulting wealth will be:

$2^{N}p^{K}(1-p)^{N-K}W\!.$

Note that the ordering of the wins and losses does not affect the resulting wealth.

Suppose another bettor bets a different amount, $(2p-1+\Delta )W$  for some value of $\Delta$  (where $\Delta$  may be positive or negative). They will have $(2p+\Delta )W$  after a win and $[2(1-p)-\Delta ]W$  after a loss. After the same series of wins and losses as the Kelly bettor, they will have:

$(2p+\Delta )^{K}[2(1-p)-\Delta ]^{N-K}W$

Take the derivative of this with respect to $\Delta$  and get:

$K(2p+\Delta )^{K-1}[2(1-p)-\Delta ]^{N-K}W-(N-K)(2p+\Delta )^{K}[2(1-p)-\Delta ]^{N-K-1}W$

The function is maximized when this derivative is equal to zero, which occurs at:

$K[2(1-p)-\Delta ]=(N-K)(2p+\Delta )$

which implies that

$\Delta =2({\frac {K}{N}}-p)$

but the proportion of winning bets will eventually converge to:

$\lim _{N\to +\infty }{\frac {K}{N}}=p$

according to the weak law of large numbers.

So in the long run, final wealth is maximized by setting $\Delta$  to zero, which means following the Kelly strategy.

This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the Kth win knowing that the rest of the bets will lose), one will end up with the most money if one bets:

$\left(2{\frac {K}{N}}-1\right)W$

each time. This is true whether $N$  is small or large. The "long run" part of Kelly is necessary because $K$  is not known in advance, just that as $N$  gets large, $K$  will approach $pN$ . Someone who bets more than Kelly can do better if $K>pN$  for a stretch; someone who bets less than Kelly can do better if $K<pN$  for a stretch, but in the long run, Kelly always wins.
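The known-$K$ claim can be verified with a coarse grid search (an illustrative check, not part of the original argument):

```python
# If K of the N outcomes are known in advance to be wins, wealth after
# betting a fixed fraction f each time is (1 + f)^K * (1 - f)^(N - K).
# A grid search confirms the maximizer is f = 2K/N - 1 (here 0.2):
K, N = 60, 100
grid = [i / 1000 for i in range(1000)]
best = max(grid, key=lambda f: (1 + f) ** K * (1 - f) ** (N - K))
print(best)  # 0.2
```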

The heuristic proof for the general case proceeds as follows.

In a single trial, if you invest the fraction $f$  of your capital and your strategy succeeds, your capital at the end of the trial increases by the factor $1-f+f(1+b)=1+fb$ ; likewise, if the strategy fails, your capital is decreased by the factor $1-fa$ . Thus at the end of $N$  trials (with $pN$  successes and $qN$  failures), the starting capital of $1 yields

$C_{N}=(1+fb)^{pN}(1-fa)^{qN}.$

Maximizing $\log(C_{N})/N$ , and consequently $C_{N}$ , with respect to $f$  leads to the desired result

$f^{*}=p/a-q/b.$

Edward O. Thorp provided a more detailed discussion of this formula for the general case. There, it can be seen that the substitution of $p$  for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since $p$  is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting $f^{*}$  each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and $p$  and $b$  are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, $pN$  successes and $qN$  failures are highly likely only for very large $N$ .
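A Monte Carlo sketch of this point (the parameter and sample-size choices are ours): with $p=0.6$, $b=1$, $a=1$, the average per-bet log growth rate should peak at the Kelly fraction $f^{*}=p/a-q/b=0.2$.

```python
import math
import random

random.seed(0)
p, b, a = 0.6, 1.0, 1.0
n_bets, n_paths = 1000, 200

def growth_rate(f):
    """Average per-bet log growth over simulated betting sequences."""
    total = 0.0
    for _ in range(n_paths):
        log_w = 0.0
        for _ in range(n_bets):
            if random.random() < p:
                log_w += math.log(1 + f * b)   # win multiplies wealth by 1 + fb
            else:
                log_w += math.log(1 - f * a)   # loss multiplies wealth by 1 - fa
        total += log_w / n_bets
    return total / n_paths

rates = {f: growth_rate(f) for f in (0.1, 0.2, 0.3, 0.5)}
print(max(rates, key=rates.get))
```

With a large number of bets, the fraction 0.2 comes out ahead of both smaller and larger fractions.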

## Using Python and SymPy

For a symbolic verification with Python and SymPy one would set the derivative $y'(x)$  of the expected value of the logarithmic bankroll $y(x)$  to 0 and solve for $x$ :

```python
>>> from sympy import *
>>> x, b, p = symbols('x b p')
>>> y = p * log(1 + b * x) + (1 - p) * log(1 - x)
>>> solve(diff(y, x), x)
[-(1 - p - b * p) / b]
```
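The same symbolic check extends to the general investment case with loss fraction $a$, recovering $f^{*}=p/a-q/b$ from the Statement section:

```python
from sympy import symbols, log, diff, solve

# General case: a loss removes the fraction a of the amount invested.
x, a, b, p = symbols('x a b p')
y = p * log(1 + b * x) + (1 - p) * log(1 - a * x)
sol = solve(diff(y, x), x)
print(sol[0])  # equivalent to p/a - (1 - p)/b
```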


## Bernoulli

In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is entirely different (Bernoulli wanted to resolve the St. Petersburg paradox).

An English-language translation of the Bernoulli article was not published until 1954, but the work was well-known among mathematicians and economists.

## Multiple outcomes

Kelly's criterion may be generalized to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes. The probability that the $k$ -th horse wins the race is $p_{k}$ , the total amount of bets placed on the $k$ -th horse is $B_{k}$ , and

$\beta _{k}={\frac {B_{k}}{\sum _{i}B_{i}}}={\frac {1}{1+Q_{k}}},$

where $Q_{k}$  are the pay-off odds. $D=1-tt$  is the dividend rate, where $tt$  is the track take or tax, and ${\frac {D}{\beta _{k}}}$  is the revenue rate after deduction of the track take when the $k$ -th horse wins. The fraction of the bettor's funds to bet on the $k$ -th horse is $f_{k}$ . Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set $S^{o}$  of outcomes on which it is reasonable to bet, and explicit formulas for the optimal fractions $f_{k}^{o}$  of the bettor's wealth to bet on the outcomes in $S^{o}$ . The algorithm for the optimal set of outcomes consists of four steps.

Step 1: Calculate the expected revenue rate for all possible (or only for several of the most promising) outcomes:
$er_{k}={\frac {D}{\beta _{k}}}p_{k}=D(1+Q_{k})p_{k}.$
Step 2: Reorder the outcomes so that the new sequence $er_{k}$  is non-increasing. Thus $er_{1}$  will be the best bet.
Step 3: Set $S=\varnothing$  (the empty set), $k=1$ , $R(S)=1$ . Thus the best bet $er_{k}=er_{1}$  will be considered first.
Step 4: Repeat:
If $er_{k}={\frac {D}{\beta _{k}}}p_{k}>R(S)$  then insert $k$ -th outcome into the set: $S=S\cup \{k\}$ , recalculate $R(S)$  according to the formula:
$R(S)={\frac {1-\sum _{i\in S}{p_{i}}}{1-\sum _{i\in S}{\frac {\beta _{i}}{D}}}}$  and then set $k=k+1$ ,
Otherwise, set $S^{o}=S$  and stop the repetition.

If the optimal set $S^{o}$  is empty then do not bet at all. If the set $S^{o}$  of optimal outcomes is not empty, then the optimal fraction $f_{k}^{o}$  to bet on $k$ -th outcome may be calculated from this formula:

$f_{k}^{o}={\frac {er_{k}-R(S^{o})}{\frac {D}{\beta _{k}}}}=p_{k}-{\frac {R(S^{o})}{\frac {D}{\beta _{k}}}}$ .

One may prove that

$R(S^{o})=1-\sum _{i\in S^{o}}{f_{i}^{o}}$

where the right-hand side is the reserve rate. Therefore the requirement $er_{k}={\frac {D}{\beta _{k}}}p_{k}>R(S)$  may be interpreted as follows: the $k$ -th outcome is included in the set $S^{o}$  of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate. The formula for the optimal fraction $f_{k}^{o}$  may be interpreted as the excess of the expected revenue rate of the $k$ -th horse over the reserve rate, divided by the revenue after deduction of the track take when the $k$ -th horse wins; equivalently, it is the excess of the probability of the $k$ -th horse winning over the reserve rate, divided by that same revenue. The binary growth exponent is

$G^{o}=\sum _{i\in S^{o}}{p_{i}\log _{2}{(er_{i})}}+\left(1-\sum _{i\in S^{o}}{p_{i}}\right)\log _{2}{(R(S^{o}))},$

and the doubling time is

$T_{d}={\frac {1}{G^{o}}}.$

This method of selection of optimal bets may also be applied when probabilities $p_{k}$  are known only for several of the most promising outcomes, while the remaining outcomes have no chance to win. In this case it must be that

$\sum _{i}{p_{i}}<1,$  and
$\sum _{i}{\beta _{i}}<1$ .
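The four-step algorithm above can be sketched in Python (the function and variable names are ours):

```python
def kelly_horse_race(p, Q, D=1.0):
    """Sketch of the four-step algorithm: p[k] is the win probability of
    outcome k, Q[k] its pay-off odds ("Q_k to 1"), D the dividend rate."""
    n = len(p)
    beta = [1.0 / (1.0 + Q[k]) for k in range(n)]
    er = [D * (1.0 + Q[k]) * p[k] for k in range(n)]   # Step 1: expected revenue rates
    order = sorted(range(n), key=lambda k: -er[k])     # Step 2: non-increasing er
    S, R = [], 1.0                                     # Step 3: empty set, R(S) = 1
    for k in order:                                    # Step 4: grow S while er_k > R(S)
        if er[k] <= R:
            break
        S.append(k)
        R = (1.0 - sum(p[i] for i in S)) / (1.0 - sum(beta[i] / D for i in S))
    f = {k: p[k] - R * beta[k] / D for k in S}         # optimal fractions
    return S, R, f

# Two outcomes at even odds with no track take (D = 1): the algorithm
# reduces to the simple two-outcome Kelly bet of about 20% on the favourite.
S, R, f = kelly_horse_race([0.6, 0.4], [1.0, 1.0])
print(S, f)
```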

## Application to the stock market

In mathematical finance, a portfolio is called growth optimal if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth).

Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of various assets, but these parameters are at best estimated or modeled with significant uncertainty. Ex-post performance of a supposed growth optimal portfolio may differ wildly from the ex-ante prediction if portfolio weights are largely driven by estimation error. Dealing with parameter uncertainty and estimation error is a large topic in portfolio theory.

The second-order Taylor polynomial can be used as a good approximation of the main criterion. Primarily, it is useful for stock investment, where the fraction devoted to investment is based on simple characteristics that can be easily estimated from existing historical data – expected value and variance. This approximation leads to results that are robust and close to those of the original criterion.

### Single asset

Considering a single asset (stock, index fund, etc.) and a risk-free rate, it is easy to obtain the optimal fraction to invest through geometric Brownian motion. The value of a lognormally distributed asset $S$  at time $t$  ($S_{t}$ ) is

$S_{t}=S_{0}\exp \left(\left(\mu -{\frac {\sigma ^{2}}{2}}\right)t+\sigma W_{t}\right),$

from the solution of the geometric Brownian motion where $W_{t}$  is a Wiener process, and $\mu$  (percentage drift) and $\sigma$  (the percentage volatility) are constants. Taking expectations of the logarithm:

$\mathbb {E} \log(S_{t})=\log(S_{0})+(\mu -{\frac {\sigma ^{2}}{2}})t.$

Then the expected log return $R_{s}$  is

$R_{s}=\left(\mu -{\frac {\sigma ^{2}}{2}}\,\right)t.$

For a portfolio made of an asset $S$  and a bond paying risk-free rate $r$ , with fraction $f$  invested in $S$  and $(1-f)$  in the bond, the expected one-period return is given by

$\mathbb {E} \left(f\left({\frac {S_{1}}{S_{0}}}-1\right)+(1-f)r\right)=\mathbb {E} \left(f\exp \left(\left(\mu -{\frac {\sigma ^{2}}{2}}\right)+\sigma W_{1}\right)\right)-f+(1-f)r;$

in the context of the Kelly criterion, however, one works instead with the expected one-period log return $G(f)$ :

$G(f)=f\mu -{\frac {(f\sigma )^{2}}{2}}+(1-f)r.$

Maximizing $G(f)$  with respect to $f$ , we obtain

$f^{*}={\frac {\mu -r}{\sigma ^{2}}}.$

$f^{*}$  is the fraction that maximizes the expected logarithmic return, and so, is the Kelly fraction.

Thorp arrived at the same result but through a different derivation.

Remember that $\mu$  is different from the asset's expected log return $R_{s}$ ; conflating the two is a common mistake in websites and articles discussing the Kelly criterion.
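A numeric illustration (the parameter values are arbitrary, chosen only for the example):

```python
# Kelly fraction for a single lognormal asset versus a risk-free bond,
# f* = (mu - r) / sigma^2; the parameter values are illustrative only.
mu, r, sigma = 0.08, 0.02, 0.20   # drift, risk-free rate, volatility
f_star = (mu - r) / sigma ** 2
print(f_star)                      # approximately 1.5, i.e. a leveraged position

# mu is the drift, not the expected log return R_s = mu - sigma^2 / 2:
R_s = mu - sigma ** 2 / 2
print(R_s)                         # approximately 0.06
```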

### Many assets

Consider a market with $n$  correlated stocks $S_{k}$  with stochastic returns $r_{k}$ , $k=1,...,n$ , and a riskless bond with return $r$ . An investor puts a fraction $u_{k}$  of their capital in $S_{k}$  and the rest is invested in the bond. Without loss of generality, assume that the investor's starting capital is equal to 1. According to the Kelly criterion one should maximize

$\mathbb {E} \left[\ln \left((1+r)+\sum \limits _{k=1}^{n}u_{k}(r_{k}-r)\right)\right].$

Expanding this with a Taylor series around ${\vec {u_{0}}}=(0,\ldots ,0)$  we obtain

$\mathbb {E} \left[\ln(1+r)+\sum \limits _{k=1}^{n}{\frac {u_{k}(r_{k}-r)}{1+r}}-{\frac {1}{2}}\sum \limits _{k=1}^{n}\sum \limits _{j=1}^{n}u_{k}u_{j}{\frac {(r_{k}-r)(r_{j}-r)}{(1+r)^{2}}}\right].$

Thus we reduce the optimization problem to quadratic programming and the unconstrained solution is

${\vec {u^{\star }}}=(1+r)({\widehat {\Sigma }})^{-1}({\widehat {\vec {r}}}-r)$

where ${\widehat {\vec {r}}}$  and ${\widehat {\Sigma }}$  are the vector of means and the matrix of second mixed noncentral moments of the excess returns.
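The unconstrained solution can be evaluated with NumPy (all input numbers below are illustrative placeholders, not estimates):

```python
import numpy as np

# Unconstrained many-asset Kelly weights u* = (1 + r) * Sigma^{-1} (r_hat - r),
# where Sigma is the matrix of second mixed noncentral moments of the
# excess returns and r_hat the vector of mean returns.
r = 0.01                                   # riskless rate
r_hat = np.array([0.05, 0.07])             # mean returns of two stocks
Sigma = np.array([[0.010, 0.002],
                  [0.002, 0.020]])         # second moments of excess returns
u_star = (1 + r) * np.linalg.solve(Sigma, r_hat - r)
print(u_star)                              # fraction of capital in each stock
```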

There is also a numerical algorithm for the fractional Kelly strategies and for the optimal solution under no leverage and no short selling constraints.

## Criticism

Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for optimal growth rate. The conventional alternative is expected utility theory which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times). Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations.