Investing


Last month I wrote a post about why you should not make an investment in a game if it will not pay for itself.  If an investment costs $I$ and gives you a return of $i$ per turn, then it will pay for itself in $I/i$ turns.  The return on investment (ROI) is $$r= i/I.$$ In some games we have several choices about how to invest, and generally speaking it is best to choose the investments with the highest ROI.

If you can invest on any turn, the ROI investment rule of thumb is,

“Don’t invest if there are fewer than $$I/i=1/r$$ turns left in the game, excluding the current turn.”
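
For example, in the tank game below, investing on a turn costs that turn’s income $I$ and raises your income by $i = 0.1\,I$ per turn thereafter, so $r = 0.1$ and the rule says to invest only when more than $1/r = 10$ turns remain after the current one.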

 

The tank game

The tank game is one of the simplest investment games.  It illustrates a few common ideas in investment games:

  • Exponential Sharpening of the Axe
  • The optimal investment choice often depends only on the ROI for the turn and the number of turns left in the game.
  • Solving an investment game often involves finding the largest rectangle under the investment curve.
  • The investment-exploitation phase transition, dominating strategies, comparing two very similar strategies, null moves, and strategy swaps
  • Actually proving that a strategy is correct is a bit tricky.
  • Discrete vs. Continuous.

I will address some of these ideas in this post and the rest in a follow-up post.

 

Suppose that you start the game with an income of \$100 million per turn.  Each turn you have two choices:

  • (investment option) investing all your income into factories and increasing your income by 10%, or
  • (don’t invest option) building tanks that cost one million dollars each.

Assume that it is also possible to build half a tank, or any other fraction of a tank, so if you spend \$500,000 on tanks, you get 0.5 tanks. If you spend \$2,300,000 on tanks, then you get 2.3 tanks. The game lasts for 27 turns and the object of the game is to maximize the number of tanks created.

Intuitively, you want to build up your factories in the first part of the game (Invest/Growth phase), and then transition to making tanks in the later part of the game (Exploitation phase).

Suppose that you build factory equipment for the first 5 turns, and then spend 22 turns building tanks.  After the first turn, you have \$110 million income (\$100 million original income plus \$10 million income due to the investment into factory equipment). After the second turn, your income would be \$121 million (\$110 million at the start of the turn plus \$11 million additional income due to investment). After the third turn you would have \$133,100,000 income, after the fourth \$146,410,000 income, and finally, at the end of the 5th turn, your income would be \$161,051,000 per turn. If you then build tanks for 22 turns, then you would have $$22\cdot161.051 = 3543.122\ \ \mathrm{tanks}$$ at the end of the game.
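
As a quick check of this arithmetic, here is a minimal Python sketch (my own, not from the original post) that computes the number of tanks produced by any strategy that invests for the first several turns and builds tanks for the rest:

```python
def tanks_built(invest_turns, total_turns=27, income=100e6,
                growth=0.10, tank_cost=1e6):
    """Compound income by `growth` for `invest_turns` turns, then spend
    every remaining turn's income on tanks."""
    income *= (1 + growth) ** invest_turns
    build_turns = total_turns - invest_turns
    return build_turns * income / tank_cost

print(tanks_built(5))  # about 3543.12 tanks, matching the calculation above
```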

 

The optimal strategy for the tank game using the rule of thumb

The easy way to find the optimal strategy is to apply the ROI investment rule of thumb.  We should invest as long as there are more than $I/i=1/r$ turns in the game after the current turn.  In the tank game, investing increases your income by 10%, so $r=0.10$ and $$1/r=10\ \ \mathrm{turns.}$$ On turns 1 through 16 there are more than 10 turns left in the game, so you must invest on those turns.  On turn 17, there are exactly 10 turns left in the game, so it does not matter whether or not you invest on that turn.  On turns 18, 19, …, 27, there are fewer than 10 turns left in the game, so on those turns you need to build tanks.

If you do invest for 17 turns, then your income would be $$\mathrm{income} = (1.1)^{17}\cdot \ \$100,000,000= \$ 505,447,028.50$$ per turn.  Then you could buy tanks for 10 turns giving you about 5054.47 tanks.
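
If you would rather not take the rule of thumb on faith, a brute-force check over every possible length of the investment phase (restricted to invest-first strategies, which the next section justifies) gives the same answer. A minimal Python sketch:

```python
# Exhaustive check: invest on the first k turns, build tanks on the rest.
T, income, growth, tank_cost = 27, 100e6, 0.10, 1e6
for k in range(T + 1):
    tanks = (T - k) * income * (1 + growth) ** k / tank_cost
    print(f"invest for {k:2d} turns -> {tanks:8.2f} tanks")
```

The output peaks at $k=16$ and $k=17$, which tie at about 5054.47 tanks, consistent with the observation that investing on turn 17 is a matter of indifference.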

[Figure: OptTank — the income curve over the 27 turns, with the rectangle of money spent on tanks under it]

Notice that the amount of money spent on tanks equals the area of a rectangle whose height is the income at the end of the investment phase (turn 17) and whose width is the number of turns spent buying tanks.  Many investment problems are equivalent to finding the largest rectangle that “fits” under the income curve.

 

The investment-exploitation phase transition and dominating strategy swaps

If you actually want to prove that this is the optimal strategy, you should probably first prove that there is an investment phase followed by a building/exploitation phase.

We will prove that the investment phase must come first by comparing two very similar strategies where we swap a Building action and an Investing action. Comparing two similar strategies and showing that one “dominates” the other is a common tactic for finding optimal strategies.  In game theory, we say that one strategy dominates another if it is always better no matter what the opponent does.  For the single-player tank game, we will say that one strategy dominates another if it produces more tanks over the course of the game.

Option 1:  Build-then-Invest. Suppose that on turn $i$ you build tanks and on turn $i+1$ you invest in factories.  Suppose that your income on turn $i$ is $I$.  Then you would build $$\mathrm{tanks} = \frac{I}{\$1,000,000}$$ tanks on turn $i$, and on turn $i+1$ your income would increase to $$I_{\mathrm{new}}=1.1\ I.$$

Option 2:  Invest-then-Build.  On the other hand, if you swap the two actions on turns $i$ and $i+1$, then on turn $i$ your income would again increase to $$I_{\mathrm{new}}=1.1\ I,$$ but when you build the tanks on turn $i+1$ you end up with  $$\mathrm{tanks} = \frac{I_{\mathrm{new}}}{\$1,000,000}= \frac{1.1\ I}{\$1,000,000}.$$

For either option, you have the same income on turns $i+2, i+3, \ldots, 27$, but Option 2 (Invest-then-Build) gives you 10% more tanks from the swapped pair of turns than Option 1.  We conclude that Option 2 “dominates” Option 1, so in the optimal strategy a tank-building turn can never precede an investment turn.  That fact implies that there is an investment phase lasting some number of turns followed by a building phase where all you do is build tanks.

If we carefully apply the ideas in the ROI part 1 post, we can determine where the phase transition begins. Suppose that on turn $i$ we have income $I$ and we make our last investment to bring our income up to $1.1\ I$. The increase in income is $0.1\ I$ and that new income will buy $$\mathrm{tanks\ from\ new\ income} = \frac{0.1\  I (T-i)}{\$1,000,000}$$ new tanks where $T=27$ is the total number of turns in the game.  If we build tanks instead of investing on turn $i$ then we would make $$\mathrm{potential\ tanks\ on\ turn\ }i = \frac{I}{\$1,000,000}$$ tanks.  The difference is
$$\begin{aligned} \mathrm{gain\ by\ investing} &=  \frac{0.1\ I (T-i)}{\$1,000,000} - \frac{I}{\$1,000,000}\\ &= \frac{\left(0.1\ (T-i) - 1\right) I}{\$1,000,000}.\end{aligned}$$

The gain is positive if and only if $$\begin{aligned} 0.1\ I (T-i) - I &> 0\\ 0.1\ I (T-i) &> I\\ 0.1\ (T-i) &> 1\\T-i &> 10\\T-10&> i.\end{aligned}$$

Remark: Reversing the inequalities proves that the gain is negative (a loss) if and only if $T-10 < i$.

We conclude that no tanks can be built before turn $T-10=17$.  On turn $i=17$, $$0.1\ I (T-i) -I = 0.1\ I (27-17) -I =  0,$$ so the gain by investing is zero. It does not matter whether the player builds tanks or invests on turn 17.  After turn 17, the gain is negative by the Remark above, so you must build tanks after turn 17.

We have proven that the ROI investment rule of thumb works perfectly for the tank game.

This note just reviews the derivation of portfolios that maximize the Sharpe ratio.

Suppose that you have some stocks that you want to invest in. We will think of the returns of these stocks as being a random column vector $G$ in $R^n$. Suppose that $r=E[G]\in R^n$ is a vector of the expected return of the stocks and $C= E\left[ (G-r) (G-r)^T\right]$ is the covariance matrix of $G$ with the superscript $T$ indicating the transpose, thus $C\in R^{n\times n}$.

We will often want to maximize the Sharpe ratio of a portfolio, which is defined as the expected return of the portfolio minus the risk-free return, divided by the standard deviation of the portfolio’s return.  In order to simplify the math a little, we will assume that the risk-free return is 0 and $C$ is positive definite, i.e. $a^T C a>0$ for all vectors $a\in R^n\setminus\{0\}$. Thus for our purposes, the Sharpe ratio for an “allocation vector” $a\in R^n\setminus\{0\}$ will be defined as $$\rho(a) := \frac{E[a^T G]}{\sqrt{E[ (a^T G - a^T r)^2]}} =  \frac{a^T r}{\sqrt{a^T C a}}.$$ We could say that the allocation vector is in dollars, so $a_1$ would be the dollar value of the first stock held in the portfolio.  The value of $a_1$ could be negative, indicating that the stock was shorted.

It is helpful to notice that the Sharpe ratio does not change if we double or triple the amount invested in each stock.  In fact, for any real number $\gamma > 0$ and any nonzero allocation vector $a\in R^n$, $$\rho(\gamma a)= \rho(a).$$ So, when maximizing $\rho$ we can restrict ourselves to vectors $a$ where $a^T C a=1$.
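
As a quick illustration (my own sketch, with made-up numbers, using numpy), the definition and the scale invariance look like this in code:

```python
import numpy as np

def sharpe(a, r, C):
    """Sharpe ratio rho(a) = a^T r / sqrt(a^T C a), with the risk-free rate taken to be 0."""
    return (a @ r) / np.sqrt(a @ C @ a)

# Arbitrary made-up data for two stocks.
r = np.array([0.002, 0.001])
C = np.array([[2e-4, 5e-5],
              [5e-5, 1e-4]])
a = np.array([3.0, 7.0])
print(sharpe(a, r, C), sharpe(2.5 * a, r, C))  # identical: positive rescaling changes nothing
```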

The matrix $C$ is real, symmetric, and positive definite, so it has a Cholesky decomposition $C=U^T U$ where $U$ is upper triangular and invertible.  Let $u= U a$.  Then $$1=a^T C a= a^T U^T U a = u^T u= ||u||^2, $$ so $u$ has norm 1. Thus if we want to maximize $\rho(a)$, it suffices (by restricting to vectors $a$ where $a^T C a=1$) to maximize $$\rho(a) = \frac{a^T r}{\sqrt{a^T C a}} = a^T r = u^T U^{-T} r$$ over all unit vectors $u$. (We use $U^{-T}$ to denote $(U^T)^{-1}$, the inverse transpose of $U$.)  The unit vector which maximizes $u^T U^{-T} r$ is simply $$u^*=   \frac{U^{-T} r}{|| U^{-T} r||}.$$ We can now generate an optimal allocation vector $a^*$ by

$$  a^* = U^{-1} u^*=  \frac{U^{-1} U^{-T} r}{|| U^{-T} r||}  = \frac{ (U^T U )^{-1}  r}{|| U^{-T} r||}  = \frac{ C^{-1}  r}{|| U^{-T} r||}.$$ The scalar factor $|| U^{-T} r||$ has no effect on $\rho$, so $$a^{**} =  C^{-1}  r$$ is also an optimal allocation vector.  Note that the Sharpe ratio of $a^*$ is

$$\rho(a^{**})=\rho(a^*)=\frac{(a^{*})^T r}{\sqrt{(a^{*})^T C a^*}}=(a^{*})^T r= \frac{r^T U^{-1} U^{-T} r}{|| U^{-T} r||}= || U^{-T} r||.$$
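
The formulas above translate directly into a few lines of numpy. This is a minimal sketch under the same assumptions (zero risk-free rate, positive-definite $C$):

```python
import numpy as np

def max_sharpe_allocation(r, C):
    """Return a** = C^{-1} r and its Sharpe ratio sqrt(r^T C^{-1} r) = ||U^{-T} r||."""
    a = np.linalg.solve(C, r)      # a** = C^{-1} r
    return a, np.sqrt(r @ a)       # r^T C^{-1} r = ||U^{-T} r||^2
```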

 

Example 1

Suppose that you want to invest in two uncorrelated stocks.  Assume that their expected returns are $r=( 0.001, 0.001)^T$ and their covariance matrix is $$C=\left(\begin{matrix} 10^{-4} & 0 \\ 0 & 10^{-4}\end{matrix}\right).$$  All optimal allocations $a$ of the stocks are positive multiples of $$a^{**} = C^{-1} r = \left(\begin{matrix} 10^{4} & 0 \\ 0 & 10^{4}\end{matrix}\right)( 0.001, 0.001)^T= (10, \  10)^T.$$ This merely indicates that the optimal Sharpe ratio is attained if and only if you invest the same amount of money in each of these stocks.

Example 2

Suppose that you want to invest in two uncorrelated stocks.   Assume that their expected returns are $r=( 0.001, 0.0005)^T$ and their covariance matrix is $$C=\left(\begin{matrix} 10^{-4} & 0 \\ 0 & 10^{-4}\end{matrix}\right).$$  All optimal allocations $a$ of the stocks are positive multiples of $$a^{**} = C^{-1} r = \left(\begin{matrix} 10^{4} & 0 \\ 0 & 10^{4}\end{matrix}\right)( 0.001, 0.0005)^T= (10, \  5)^T.$$ This indicates that the optimal Sharpe ratio is attained if and only if you invest twice as much money in the first stock as in the second (and a nonzero total amount is invested).  Note that the Kelly criterion often indicates that your bets should be proportional to the edge of your investments, so it gives similar advice.
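
A quick numerical check of this example (a standalone numpy snippet using the numbers above):

```python
import numpy as np

r = np.array([0.001, 0.0005])
C = np.diag([1e-4, 1e-4])
a = np.linalg.solve(C, r)   # a** = C^{-1} r
print(a)                    # [10.  5.]
print(np.sqrt(r @ a))       # the optimal Sharpe ratio, about 0.1118
```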

Example 3

Suppose that we have two gamblers.  The first gambler is willing to give you 2.2 times your wager if candidate A wins the election, but you lose the bet if candidate A does not win.  (I.e. if you wager $\$10$ with the first gambler, your net gain will be $\$22 - \$10 = \$12$ if you win.)   The second gambler is willing to pay you twice your bet if candidate B wins, and you lose your bet with the second gambler if candidate B loses.

This could be called an arbitrage situation.

Let’s assume that there is a 50% chance that candidate A will win and a 50% chance that candidate B will win.  We can think of each gambler as being a stock that we can invest in.  The expected return of a bet with the first gambler is 0.1 per dollar wagered (i.e. if you wager ${\$}10$ with the first gambler, your expected net gain is $0.1\cdot{\$}10 = {\$}1$).  The expected return of a bet with the second gambler is 0.  The covariance matrix requires some computation.

$$C_{11} = E[ (G_1-r_1)^2] = 1/2 (  (1.2 - 0.1)^2 +  (-1 - 0.1)^2 ) = 1.21.$$

$$C_{12} = C_{21} =  E[ (G_1-r_1)(G_2-r_2)] = 1/2 (  (1.2 - 0.1)(-1) +  (-1 - 0.1)(1) ) = -1.1.$$

$$C_{22} =   E[ (G_2-r_2)^2] = 1/2 (  (-1)^2 +  (1)^2 ) = 1.$$ $$C = \left(\begin{matrix} 1.21 & -1.1 \\ -1.1 & 1 \end{matrix}\right).$$

Interestingly, $C$ is not invertible.  This is because $(10, 11) C (10, 11)^T = 0$.  This means that if you wager $\$10$ with gambler 1 and $\$11$ with gambler 2, you will always win $\$1$.  If candidate A wins, then you gain $\$12$ from the first gambler and lose $\$11$ to the second.  If candidate B wins, then you lose $\$10$ to the first gambler and gain $\$11$ from the second.  Since you always win $\$1$, your volatility is zero and your Sharpe ratio is infinite.  In the derivation, we assumed that $C$ was positive definite, but in this example, it is not.
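
A short numerical check (my own) of the singularity of $C$ and of the riskless \$10/\$11 combination:

```python
import numpy as np

C = np.array([[1.21, -1.1],
              [-1.1,  1.0]])
w = np.array([10.0, 11.0])
print(np.linalg.det(C))  # essentially 0 (up to rounding): C is singular
print(w @ C @ w)         # essentially 0: the $10/$11 combination of bets has zero variance
```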

In a future post, I would like to give a few more examples and maybe even compare the optimal Sharpe ratio allocation with a Kelly allocation.