Math


In part 1, I presented a proof that sin(2°)/2 > sin(3°)/3 using only trigonometry. In part 2, I presented a proof using the Taylor series expansion of sine.  In part 3 below, I present three additional proofs:  a proof using the average value of a function and the mean value theorem, a proof by Reddit user Dala_The_Pimp which shows that $f(x)=\sin(x)/x=\mathrm{sinc}(x)$ is strictly decreasing on $(0,\pi/2)$ using derivatives, and a proof using the concavity of $\sin(x)$ on $(0, \pi)$.

It suffices to prove that $f(x) = \sin(x)/x$ is decreasing on $(0, \pi/2]$.

Let $$\mathrm{aveVal}(g(t) , x) = \frac1x \int_0^x g(t) dt.$$

Lemma 1:  If  $g(t)$ is continuous and strictly decreasing on $[0,a]$ and $$f(x)= \mathrm{aveVal}(g(t) , x),$$then $f(x)$ is strictly decreasing on $(0,a]$.

Proof: Notice that if $g(t)$ is continuous and $f(x)= \mathrm{aveVal}(g(t) , x)$, then for all $x\in(0,a]$,

$$f'(x) = -\frac1{x^2} \int_0^x g(t) dt + \frac{g(x)}x$$

$$= \frac1{x^2} \left( -\int_0^x g(t) dt + x g(x)\right)$$

$$= \frac1{x^2} ( -x g(c) + x g(x))$$

where $c\in(0,x)$ by the mean value theorem for integrals, so

$$f'(x)= \frac1x (g(x) - g(c) ) <0$$

since $g$ is strictly decreasing and $c<x$, which proves the lemma.

Now,

$$\begin{aligned} \sin(x) - x &= \int_0^x (\cos(t) - 1)\, dt,\mathrm{\  so} \\ \sin(x)/x - 1 &=\mathrm{aveVal}(\cos(t) -1,  x).\end{aligned}$$

Since $\cos(t)-1$ is continuous and strictly decreasing on $[0, \pi]$, Lemma 1 implies that

$$\mathrm{aveVal}(\cos(t)-1, x)$$is strictly decreasing on $(0, \pi]$, so $$\sin(x)/x - 1$$ is strictly decreasing on $(0, \pi]$, and therefore

$\sin(x)/x$ is strictly decreasing on $(0, \pi]$.
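
As a quick numerical sanity check (my own addition, using NumPy and SciPy; not part of the original argument), we can confirm that $\mathrm{aveVal}(\cos(t)-1, x)$ agrees with $\sin(x)/x - 1$ and decreases on $(0, \pi]$:

```python
# Sanity check (not a proof): aveVal(cos(t) - 1, x) should equal
# sin(x)/x - 1 and should be strictly decreasing on (0, pi].
import numpy as np
from scipy.integrate import quad

def ave_val(g, x):
    """Average value of g over [0, x]."""
    return quad(g, 0.0, x)[0] / x

xs = np.linspace(0.01, np.pi, 200)
vals = [ave_val(lambda t: np.cos(t) - 1.0, x) for x in xs]

# aveVal(cos(t) - 1, x) agrees with sin(x)/x - 1 ...
assert np.allclose(vals, np.sin(xs) / xs - 1.0)
# ... and the sampled values are strictly decreasing.
assert all(u > v for u, v in zip(vals, vals[1:]))
print("aveVal(cos(t) - 1, x) = sin(x)/x - 1 and is decreasing on (0, pi]")
```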

Reddit post

Below I’m going to copy Reddit user Dala_The_Pimp’s solution from Feb 8, 2023. (I am revising it just a little.)

Let $f(x)=\sin(x)/x$. Then
$$f'(x)= \frac{x\cos(x) -\sin(x)}{x^2}.$$
Now let $g(x) = x\cos(x) -\sin(x).$ Clearly $g(0)=0.$ Also
$$g'(x)=-x\sin(x)$$
which is clearly negative on $(0,\pi/2)$; thus $g(x)$ is decreasing, hence $g(x)<g(0)=0$. Since $g(x)<0$, we have $f'(x)<0$, so $f(x)$ is decreasing. Hence $f(2^\circ)>f(3^\circ)$ and $\sin(2^\circ)/2^\circ >\sin(3^\circ)/3^\circ$, which implies
$$\sin(2^\circ)/2 >\sin(3^\circ)/3.$$

Proof Using Concavity

We also can use concavity to prove that $\sin(2^\circ)/2 >\sin(3^\circ)/3.$ The second derivative of $\sin(x)$ is $-\sin(x)$, which is strictly negative on $(0, \pi)$. That implies that $\sin(x)$ is strictly concave on $(0,\pi)$. (See e.g. the proof wiki.) By the definition of strict concavity, for all $\lambda\in (0,1)$,
$$\sin( (1-\lambda)\cdot 0 + \lambda\cdot 3^\circ) > (1-\lambda)\sin(0) + \lambda\sin(3^\circ).$$
Setting $\lambda = 2/3$, we get$$\sin( 2^\circ) > \frac23 \sin(3^\circ),\mathrm{\ \ and\ hence}$$ $$\sin( 2^\circ)/2 > \sin(3^\circ)/3.$$
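
Of course, the conclusion itself is easy to check numerically; here is a two-line check in Python (my addition, not a substitute for any of the proofs):

```python
# Numeric check of sin(2°)/2 > sin(3°)/3 (a check, not a proof).
import math

deg = math.pi / 180  # one degree in radians
print(math.sin(2 * deg) / 2)  # 0.0174497...
print(math.sin(3 * deg) / 3)  # 0.0174453...
assert math.sin(2 * deg) / 2 > math.sin(3 * deg) / 3
```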

In part 1, I proved that sin(2°)/2 > sin(3°)/3 using trigonometry.  Here is a proof using power series.

In “Some Simple Lemmas For Bounding Power Series (Part 1)” Example 1, we applied the lemmas to prove that

$$x-x^3/6 + x^5/120>\sin(x) > x-x^3/6$$when $0<x<\sqrt{3}.$  (That’s actually true for all $x>0$, but I did not prove it.)

Let $\epsilon = 1^\circ$. Note that $\epsilon = \frac{\pi}{180} < 1/40.$

Then,

$$\begin{aligned}\frac{\sin(2^\circ)}2 - \frac{\sin(3^\circ)}3 &= \frac{\sin(2\epsilon)}2 - \frac{\sin(3\epsilon)}3\\&=\frac{ 3\sin(2\epsilon) - 2 \sin(3\epsilon)}6\\&>\frac{ 3 (2\epsilon - (2\epsilon)^3/6) - 2 (3\epsilon - (3\epsilon)^3/6 + (3\epsilon)^5/120 )}6\\&=\frac{ 3(2\epsilon - 4 \epsilon^3/3) - 2 (3\epsilon - 9\epsilon^3/2 + 81\epsilon^5/40 )}6\\&=\frac{ 6\epsilon - 4 \epsilon^3 - (6\epsilon - 9\epsilon^3 + 81\epsilon^5/20 )}6\\&=\frac{ - 4 \epsilon^3 + 9\epsilon^3 - 81\epsilon^5/20 }6\\&=\frac{ 5 - 81\epsilon^2/20 }6 \epsilon^3\\&>\frac{ 5 - 1/20 }6 \epsilon^3 \\&>0, \end{aligned}$$which proves sin(2°)/2 > sin(3°)/3.
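
As a numerical aside (my addition): the exact difference is about $4.43\cdot 10^{-6}$, and the lower bound derived above is remarkably tight:

```python
# Compare the exact difference sin(2°)/2 - sin(3°)/3 with the derived
# lower bound (5 - 81*eps^2/20)/6 * eps^3.  Both are about 4.43e-06.
import math

eps = math.pi / 180  # 1 degree in radians
diff = math.sin(2 * eps) / 2 - math.sin(3 * eps) / 3
bound = (5 - 81 * eps**2 / 20) / 6 * eps**3
print(diff, bound)
assert diff > bound > 0
```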

Often functions are defined or approximated by infinitely long sums which can be thought of as polynomials with an infinite degree. For example,
$$\cos(x) = 1- x^2/2! + x^4/4! - x^6/6! + \cdots.$$
(where $n! = n \cdot (n-1) \cdot (n-2) \cdots 1$; for example, $4!=4\cdot 3\cdot 2\cdot 1=24$.) These sums are usually referred to as infinite series, and you can often use the first few terms to approximate the sum of all of the terms. For example,
$$\cos(x) \approx 1 - x^2/2$$when $|x| < 0.2$.

In my last post, I presented some bounds on the error that occurs when you approximate an alternating series with the first few terms of the series. An alternating series is an infinite sum where the signs of the terms alternate between positive and negative. In this post, I will present a fairly simple bound on the error of approximating an infinite series whose terms eventually decay at least geometrically.

We will use this well known, useful fact (#87 in the “100 Most Useful Theorems and Ideas in Mathematics”).

Fact 1: If $|x|<1$,

$$1/(1-x) = 1 +x +x^2 + \cdots.$$

Proof: Fix any $x\in(-1,1)$. Let $S_n = 1+x+\cdots+x^n.$ Then
$\lim_{n\rightarrow\infty} S_n$ exists by the ratio test, and
$$\begin{aligned} S_n (1-x) &= 1- x^{n+1} \\
\lim_{n\rightarrow\infty} S_n (1-x)&= \lim_{n\rightarrow\infty} (1- x^{n+1})\\
(1-x)\lim_{n\rightarrow\infty} S_n&= 1\\
\lim_{n\rightarrow\infty} S_n&= 1/(1-x).\quad\mathrm{□}
\end{aligned}$$

 

Lemma: Suppose $a_1, a_2, \ldots$ is a sequence of real numbers, $n$ is a positive integer, and $\alpha$ is a real number such that:

  1. $0\leq \alpha < 1,$
  2. $\alpha |a_i| \geq |a_{i+1}|$ for all integers $i\geq n+1$, and
  3. $S = \sum_{i=1}^\infty a_i$ exists.

Then $$\left| S - \sum_{i=1}^{n} a_i \right| \leq |a_{n+1}|/(1-\alpha).$$

Proof: First I claim that for all positive integers $j$, $$|a_{n+j}| \leq \alpha\,^{j-1} |a_{n+1}|.$$ This holds almost trivially when $j=1$. And if we assume that the claim holds for $j=k$, then
$$|a_{n+k+1}| \leq \alpha |a_{n+k}| \leq \alpha\; \alpha\,^{k-1} |a_{n+1} | = \alpha\,^k |a_{n+1}|$$ which proves that the claim holds for $j=k+1$, so the claim holds by induction.

Now by the claim and Fact 1, $$
\begin{aligned}
\left| S - \sum_{i=1}^{n} a_i \right| &= \left|\sum_{i=n+1}^\infty a_i\right|\\
&\leq \sum_{i=n+1}^\infty | a_i | \\
&\leq \sum_{i=0}^\infty \alpha\,^i |a_{n+1}| \\
&= |a_{n+1}|/ (1-\alpha)
\end{aligned}$$which proves the lemma.$\quad\mathrm{□}$

Examples

Example 1: By definition,

$$\exp(x) = 1 + x + x^2/2! + x^3/3! + \cdots.$$
If $|x| <1$, then the lemma applies with $n=2$ and $\alpha=1/3$, so
$$ | \exp(x) - (1+x) | \leq |x^2/2|/(1-\alpha) = \frac34 x^2.$$
We can also apply the lemma with $|x|<1$, $n=3$, and $\alpha = 1/4$ to get
$$| \exp(x) - (1+x+x^2/2) | \leq |x^3/6|/(1-\alpha) = \frac29 |x^3|.$$
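
Both bounds are easy to spot-check on a grid (my own sketch, not part of the original example):

```python
# Grid check of |exp(x) - (1+x)| <= (3/4) x^2 and
# |exp(x) - (1+x+x^2/2)| <= (2/9) |x|^3 for |x| < 1.
import numpy as np

x = np.linspace(-0.999, 0.999, 2001)
assert np.all(np.abs(np.exp(x) - (1 + x)) <= 0.75 * x**2)
assert np.all(np.abs(np.exp(x) - (1 + x + x**2 / 2)) <= (2 / 9) * np.abs(x) ** 3)
print("both bounds hold on the sampled grid")
```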


Example 2: By Fact 1 and integration,
$$-\log(1-x) = x + x^2/2+x^3/3+ x^4/4+\cdots$$when $|x|<1$. We can apply the lemma for $|x|<1$, $n=1,2,$ or 3, and $\alpha =|x|$ to get
$$\begin{aligned} |-\log(1-x) - x | & \leq |x^2/2|/(1-|x|),\\
|-\log(1-x) - (x+x^2/2) | &\leq |x^3/3|/(1-|x|),\mathrm{\ and}\\
|-\log(1-x) - (x+x^2/2+x^3/3) | &\leq |x^4/4|/(1-|x|).\\
\end{aligned}
$$

A few days ago, I wrote several proofs that  $\sin(2°)/2 > \sin(3°)/3$.  One of those proofs involves power series and two simple, useful lemmas.

Leibniz bound on alternating series (1682)

Theorem:  Suppose $a_1, a_2, \ldots$ is a sequence of real numbers such that:
1) $|a_i| \geq |a_{i+1}|$ for all positive integers $i$,
2) $a_1 >0$,
3) $a_j \cdot a_{j+1} <0$ for all positive integers $j$ (i.e. the sequence alternates in sign), and
4) $\sum_{i=1}^\infty a_i$ exists.
Then $$0 \leq \sum_{i=1}^\infty a_i \leq a_1.$$

(See Theorem 2 in this paper.) The following two lemmas are almost the same and almost follow from the Leibniz bound. (If you make the strict inequalities in the Lemmas non-strict, then you can prove them in one sentence using the Leibniz theorem.)

Lemma 1: Suppose $a_1, a_2, \ldots$ is a sequence of real numbers and $n$ is a positive integer such that:
1) $|a_i| > |a_{i+1}|$ for all integers $i\geq n$,
2) $a_n <0$,
3) $a_j \cdot a_{j+1} <0$ for all integers $j\geq n$ (i.e. the sequence alternates in sign), and
4) $\sum_{i=1}^\infty a_i$ exists.
Then $$\sum_{i=1}^\infty a_i > \sum_{i=1}^n a_i .$$

Lemma 2: Suppose $a_1, a_2, \ldots$ is a sequence of real numbers and $n$ is a positive integer such that:
1) $|a_i| > |a_{i+1}|$ for all integers $i\geq n$,
2) $a_n >0$,
3) $a_j \cdot a_{j+1} <0$ for all integers $j\geq n$ (i.e. the sequence alternates in sign), and
4) $\sum_{i=1}^\infty a_i$ exists.
Then $$\sum_{i=1}^\infty a_i < \sum_{i=1}^n a_i .$$

Proof: Here is a proof that does not use the Leibniz Theorem. (We prove Lemma 2; the proof of Lemma 1 is analogous.)

The hypotheses imply that $a_{n +2k +1}<0$, $a_{n +2k +2}>0$, and
$$ a_{n +2k +1} + a_{n +2k +2}<0$$
for all integers $k\geq 0$. So,
$$\begin{aligned}
\sum_{i=1}^\infty a_i &= \sum_{i=1}^n a_i + \sum_{i=n+1}^\infty a_i \\
&= \sum_{i=1}^n a_i + \sum_{k=0}^\infty \left( a_{n +2k +1} + a_{n +2k + 2} \right) \\
&< \sum_{i=1}^n a_i.\quad ■
\end{aligned}$$

Example 1:  $$\sin(x) = \sum_{i=0}^\infty  (-1)^i x^{2i+1}/{(2i+1)!},$$so $$x> \sin(x) > x-x^3/6$$when $0<x<\sqrt{3}$ because the absolute values of the terms are strictly decreasing on that interval.  (In reality, it is true for any real $x$, but you need more than the Lemmas above to prove that.) Furthermore,  $$x-x^3/6 + x^5/120 > \sin(x) > x-x^3/6 + x^5/120 - x^7/5040$$when $0<x<\sqrt{20}$.  (Once again, it is true for any real $x$.)
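
Here is a quick grid check of Example 1's sine bounds in Python (my addition):

```python
# Check x > sin(x) > x - x^3/6 and the next upper bound on (0, sqrt(3)).
import numpy as np

x = np.linspace(0.05, 1.73, 500)          # 1.73 < sqrt(3)
lower = x - x**3 / 6
upper = x - x**3 / 6 + x**5 / 120
assert np.all(lower < np.sin(x)) and np.all(np.sin(x) < x)
assert np.all(np.sin(x) < upper)
print("x > sin(x) > x - x^3/6 and sin(x) < x - x^3/6 + x^5/120 on the grid")
```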

Example 2:  $$\cos(x) = \sum_{i=0}^\infty  (-1)^i x^{2i}/{(2i)!},$$so $$1> \cos(x) > 1-x^2/2$$when $0<|x|<\sqrt{2}$ because the absolute values of the terms are strictly decreasing on that interval.

Example 3:  $$\log(1+x) = \sum_{i=1}^\infty (-1)^{i+1} x^{i}/i$$when $|x|<1$, so $$x> \log(x+1) > x-x^2/2$$when $0<x<1$ because the absolute values of the terms are strictly decreasing on that interval.  Furthermore, $$x-x^2/2 + x^3/3 > \log(x+1) > x-x^2/2 + x^3/3-x^4/4$$when $0<x<1$, for the same reason.

Example 4:  $$\mathrm{arctan}(x) = \sum_{i=0}^\infty  (-1)^i x^{2i+1}/(2i+1),$$so $$x> \mathrm{arctan}(x) > x-x^3/3 $$when $0<x<1$ because the absolute values of the terms are strictly decreasing on that interval.  Furthermore, $$x-x^3/3+x^5/5  > \mathrm{arctan}(x) >x-x^3/3+x^5/5  -x^7/7$$when $0<x<1$, for the same reason.


So, on Reddit, user Straight-Ad-7750 posted the question,

“Which is bigger without using a calculator: $\frac{\sin(2°)}{2}$ or $\frac{\sin(3°)}{3}$?”

(Note:  $2°= 2\cdot \frac{\pi}{180}$ and $3°= 3\cdot \frac{\pi}{180}$.)

There are many ways to prove that one is smaller than the other.  These proofs are great because each one uses one or more useful lemmas or commonly known identities.  Below we give a proof using only trig identities.  Proofs using calculus will be given in part 2.

 

Spoiler Alerts

The answer is stated in the paragraph below, so if you want to try to figure it out yourself, stop reading now!

Also, it turns out that this is a rather easy question to answer if you are a third or fourth year electrical engineering student because most electrical engineers know about the sinc function defined by $$\mathrm{sinc}(x) = \frac{\sin(x)}{x}.$$ But let’s just assume that we don’t know about the sinc function and look at the proofs that do not require knowledge of it.

 

Trig Proof

We can prove that $\frac{\sin(3°)}{3}$ is smaller just by using trig identities.
Lemma: $\cos(2°) < \cos(1°)$.
Proof:  We will apply the cosine double angle formula, the fact that $\sin^2(1°)\neq0$, and the fact that $1>\cos(1°)>0$ to prove the lemma as follows:

$$\cos(2°) = \cos^2(1°) - \sin^2(1°) < \cos^2(1°) < \cos(1°). □$$
Theorem: $\sin(2°)/2 > \sin(3°)/3$.
Proof: By the fact $1>\cos(1°)>0$, the Lemma, the double angle formula for sine, and the sine of a sum formula,
$$\begin{aligned} 3/2 &= 1 + 1/2 \\ &> \cos(1°) + \frac12\cdot\frac{\cos(2°)}{\cos(1°)} \\ &= \cos(1°) + \frac{\sin(1°)\cos(2°)}{2 \sin(1°) \cos(1°)} \\ &= \cos(1°) + \frac{\sin(1°)\cos(2°)}{\sin(2°)} \\ &= \frac{\sin(2°) \cos(1°) + \sin(1°)\cos(2°)}{\sin(2°)} \\ &= \frac{\sin(3°)}{\sin(2°)}. \end{aligned}$$
So,
$$3/2 > \sin(3°) /\sin(2°)$$
and hence
$$\sin(2°)/2 > \sin(3°)/3. □$$
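
(A quick numeric sanity check of the Lemma and of the key ratio, my addition:)

```python
# Check cos(2°) < cos(1°) and sin(3°)/sin(2°) < 3/2 numerically.
import math

deg = math.pi / 180
assert math.cos(2 * deg) < math.cos(1 * deg)        # the Lemma
ratio = math.sin(3 * deg) / math.sin(2 * deg)
print(ratio)                                        # 1.4996...
assert ratio < 1.5                                  # the Theorem's key step
```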

 

Next week I hope to post two calculus-based proofs.  Both of those proofs use cool, useful, simple lemmas!

 

This is just a minor extension of my last post.
Why is $$1/\log(1+1/k) \approx k + 1/2\ ?$$
At first, I was surprised that $$1/\log(1+1/237) \approx 237,$$ but then I realized that $$\log(1+x) \approx x$$ if $|x|$ is small, so
$$1/\log(1+x) \approx 1/x,\mathrm{\ thus}$$
$$1/\log(1+1/k) \approx k.$$
Where does the 1/2 come from? Well, you’ve got to get a better approximation of $\log(1 +x)$ to find the 1/2. You can do this with calculus.  If $|x| < 1$ and $|y| < 1$, then $$(1-x)\cdot( 1+ x + x^2 + x^3 + \ldots) = 1$$
$$1/(1-x) = 1+ x + x^2 + x^3 + \ldots$$
$$\int_0^y \frac{dx}{1-x} = y + y^2/2 + y^3/3 + \ldots$$
$$-\log(1-y) = y + y^2/2 + y^3/3 + \ldots,\mathrm{\, so}$$
$$\log(1+x) = x - x^2/2 + O(x^3)$$ using big O notation.  (Aside: $-\log( 1 - .001) = 0.0010005003335835335\ldots$)
Now,
$$\begin{aligned} 1/\log(1+x)
&= 1/(x – x^2/2 + O(x^3)) \\
&= 1/x \cdot 1/(1 – x/2 + O(x^2)) \\
&= 1/x \cdot [ 1 + ( x/2 + O(x^2)) + O( ( x/2 + O(x^2))^2) ] \\
&= 1/x \cdot ( 1 + x/2 + O(x^2) + O( x^2) ) \\
&= 1/x \cdot ( 1 + x/2 + O( x^2) ) \\
&= 1/x + 1/2 + O(x).
\end{aligned}$$Replacing $x$ with $1/k$ gives$$1/\log(1+1/k) = k + 1/2 + O(1/k).$$
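
A quick numeric look at the quality of the $k+1/2$ approximation (my addition):

```python
# 1/log(1 + 1/k) minus (k + 1/2) should shrink like O(1/k).
import math

for k in (10, 100, 237, 1000):
    val = 1 / math.log(1 + 1 / k)
    print(k, val, val - (k + 0.5))
```

For $k=237$ the printed gap is about $-3.5\cdot 10^{-4} \approx -1/(12\cdot 237)$, matching the $-1/(12x)$ term of the sharper expansion in the next post below.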

It is more work to prove sharper bounds.

I was rather surprised one day when I typed $$1/\log(1+1/237)$$ into a calculator and got 237.4996491…. I just thought it was strange that it was so very close to 237.5 but very slightly less.

I had been trying to find the maximum number of tanks that you can produce in the tank-factory game. You start the game with any number of factories. On each turn, you can either invest in more factories, thereby increasing the number of factories by 10%, or you can use the turn to produce one tank per factory. The game lasts for $T$ turns with $T>10$. If you build factories for $k$ turns and build tanks for $T-k$ turns, then the total number of tanks produced is $$f(k) = f_0 \; 1.1^k (T-k)$$ where $f_0$ is the starting number of factories.

Perhaps surprisingly, the maximum value of $f(k)$ is attained both at $k=T-10$ and $k=T-11$. Mathematically[1],
$$\max\{ f(k) \mid k = 1,2, \ldots, T\} = f(T-10) = f(T-11) \approx 3.9\cdot 1.1^T f_0.$$

But $f(x)$ can also be thought of as a real-valued function, so the maximum value of $f(x)$ over positive real numbers $x$ should be about halfway between $x=T-10$ and $x=T-11$.
$$\max_{x>0} f(x) \approx f(T-10.5).$$
To find the precise maximum of $f$ over positive real numbers, we find the point on the curve $y=f(x)$ where the tangent line is horizontal (i.e. where the derivative is zero). Each line below is equivalent to the one before it:
$$\begin{aligned} f'(x) &= 0 \\ f_0 1.1^x \log(1.1) (T-x) - f_0 1.1^x &= 0 \\ \log(1.1) (T-x) -1 &= 0 \\ \log(1.1) (T-x) &= 1 \\ T-x &= 1/\log(1.1) \\ T- 1/\log(1.1) &=x.\end{aligned}$$
The max of $f(x)$ occurs at $x = T- 1/\log(1.1)$, but we estimated above that the max would occur at $x\approx T - 10.5$, so we can conclude that
$$1/ \log(1+1/10) = 1/\log(1.1) \approx 10.5.$$
Indeed,
$$1/\log(1.1) = 10.492058687….$$
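
Both claims are easy to verify in Python (my sketch; the values $T=40$ and $f_0=1$ are arbitrary choices):

```python
# Verify f(T-10) = f(T-11) is the maximum over integer k, and
# evaluate 1/log(1.1).
import math

T, f0 = 40, 1.0
def f(k):
    return f0 * 1.1**k * (T - k)

best = max(range(T + 1), key=f)
print(best, f(T - 11), f(T - 10))        # the two equal maximal values
assert abs(f(T - 10) - f(T - 11)) < 1e-9 * f(T - 10)
print(1 / math.log(1.1))                 # 10.4920586...
```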

You can use similar reasoning to conclude that
$$1/\log( 1 + 1/k) \approx k + 1/2$$
for all positive integers $k$.

A more precise bound can be found with Taylor series. If you go through the math you can prove that for all positive real numbers $x$,
$$f(x) = 1/\log(1+1/x) = x + 1/2 - 1/(12 x) + e(x)$$
where $0< e(x) < 1/(24 x^2).$
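
(A numeric spot check of this bound, my addition:)

```python
# Check 0 < e(x) < 1/(24 x^2) for a few sample points, where
# e(x) = 1/log(1 + 1/x) - (x + 1/2 - 1/(12 x)).
import math

for x in (1.0, 10.0, 237.0):
    e = 1 / math.log(1 + 1 / x) - (x + 0.5 - 1 / (12 * x))
    assert 0 < e < 1 / (24 * x * x)
print("0 < e(x) < 1/(24 x^2) at the sampled points")
```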

 

Footnote:
[1] More generally, if $k$ and $T$ are positive integers, $k<T$, and $$g(k, n, T) = (1+1/k)^n (T-n),$$ then
$$\begin{aligned} \max\,\{ g(k,n, T)\; |\; n &= 0, 1,2, \ldots, T \} \\ &= g(k, T-k, T) \\&= g(k, T-k-1, T).\end{aligned}$$

Getting 4 under par

I really enjoy disc golf.  This year I have played the 9 hole Circleville course in State College, Pennsylvania (USA) about 50 times (usually two or three rounds per outing), and I’ve gotten 3 under par at least three times, but I have never gotten four under par.  I would love to have a decent estimate of the likelihood of getting four under par.  There is a way to estimate this probability with polynomials.

Two Holes

Suppose that on hole one you have a 30% chance of birdie (one under par and a score of -1) and a 70% chance of getting par (a score of 0).  Suppose that on hole 2, you have a 20% chance of birdie and an 80% chance of par.  What is the probability of each possible outcome after completing the first two holes?

  • You might get two birdies which is a total of -2.  The probability of that is 0.3 times 0.2 = 0.06 = 6%.
  • You might get a par followed by a birdie for a total of -1.  The probability of that is 0.7 times 0.2 = 0.14 = 14%.
  • You might get a birdie followed by a par for a total of -1.  The probability of that is 0.3 times 0.8 = 0.24 = 24%.
  • The last possibility is that you get two pars.  The probability of that result is 0.7 times 0.8 = 0.56 = 56%.

(Technical note: we are assuming that the performance on each hole is statistically independent of the performance on the other holes.)

Notice that the probabilities of getting a score -2, -1, or 0, are 6%, 38%, and 56% respectively.  (The 38% comes from adding 14% to 24%).

Perhaps surprisingly, these three probabilities can be calculated with polynomials.  If we expand

$$( 0.3\, x + 0.7) (0.2\, x + 0.8), \quad\mathrm{then\  we\ get\ } $$

$$\begin{aligned} ( 0.3\, x + 0.7) (0.2\, x + 0.8)&= 0.3 x\,  (0.2 \,x + 0.8) + 0.7(0.2\, x + 0.8) \\&= 0.06 \,x^2 + 0.24 \,x + 0.14\, x + 0.56 \\&= 0.06\, x^2 + 0.38\, x + 0.56. \end{aligned}$$

Nine Holes

I can use the same method to estimate the probability of getting four strokes below par on the 9 hole Circleville course.  Let’s suppose that the probability of getting a birdie on any given hole is given by the table below.  (We will also optimistically assume that you always get par or a birdie on every hole.)

$$\begin{array}{cc}
\text{Hole} & \text{Birdie Probability} \\
\hline
1 & 0.04 \\
2 & 0.1 \\
3 & 0.03 \\
4 & 0.4 \\
5 & 0.25 \\
6 & 0.12 \\
7 & 0 \\
8 & 0 \\
9 & 0.3 \\
\end{array}$$

Now the corresponding polynomial is (holes 7 and 8 each contribute a factor of $1$, so only seven factors appear):

$$\begin{aligned}p(x) = &(0.96 + 0.04 x)  (0.9 + 0.1 x) (0.97 + 0.03 x) (0.6 + 0.4 x)\\ &\quad\times (0.75
+ 0.25 x) (0.88 + 0.12 x)(0.7 + 0.3 x) .\end{aligned}$$

We can use Wolfram Alpha to expand $p(x)$:

$$\begin{aligned} p(x) = 0.2323& + 0.4062 x + 0.2654 x^2 + 0.0823 x^3 + 0.0128 x^4 \\ &+ 0.00098 x^5 +0.000034344 x^6 + 4.32\cdot 10^{-7}\, x^7. \end{aligned}$$

We can conclude that my most likely result is 1 under par (40.6%) and the probability that I will get exactly 4 under par over 9 holes is about 1.28%.

 

(What is $p(x)$? Suppose that a tournament sponsor will give you one dollar for getting par, $x$ dollars for getting 1 below par, $x^2$ dollars for getting 2 below par, $\ldots$, and $x^9$ dollars for getting 9 below par; then $p(x)$ is the expected payout for playing the 9 holes in the tournament.)

(You don’t actually need polynomials. In reality you are just doing a convolution of the coefficient lists when you multiply the polynomials, as in the sketch below. It is not very difficult to modify this algorithm to account for bogeys.)
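
Here is a sketch of the whole computation in Python (my addition; NumPy's convolve multiplies the per-hole polynomials):

```python
# Coefficient k of the product polynomial is the probability of
# finishing exactly k under par (assuming par or birdie on every hole).
import numpy as np

birdie_prob = [0.04, 0.1, 0.03, 0.4, 0.25, 0.12, 0.0, 0.0, 0.3]

dist = np.array([1.0])                    # probability 1 of 0 birdies so far
for p in birdie_prob:
    dist = np.convolve(dist, [1 - p, p])  # multiply by (1 - p) + p*x

for k, prob in enumerate(dist):
    print(f"{k} under par: {prob:.6f}")
# 0 under par: 0.2323..., 1 under par: 0.4062..., 4 under par: 0.0128...
```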

Definition of the Lambert W Function

The Lambert W function is a uniquely useful function that is defined by the following special property.

For every non-negative real number $z$, there exists a unique non-negative real number $x$ such that $$x\exp(x) = z.$$

If this condition is satisfied, we say that $x$ is the Lambert W of $z$, and we denote it by $$W_0(z) = x.$$

This function, $W_0$, is the “principal branch” of the Lambert W function. It accepts non-negative real numbers as inputs and provides non-negative real numbers as outputs. We express this in mathematical notation as $W_0:[0,\infty) \rightarrow [0, \infty)$. Alternatively, $W_0$ can be defined as the inverse of the function $$g(x)=x\exp(x).$$

Lambert Growth Rate

We can use the Lambert W function to solve the delayed growth differential equation $$y'(t) = \beta y(t-T)$$ where $\beta$ and $T$ are positive real numbers.

It is not difficult to show that $$y(t) = \exp(\alpha t)$$ solves the delayed growth differential equation if and only if $$\alpha = W_0(\beta T)/T.$$

We define the Lambert Growth Rate for $\beta$ and $T$ to be $$\mathrm{Lambert\ Growth\ Rate}=\alpha = W_0(\beta T)/T.$$ (Equivalently, $\alpha$ is the unique real number such that $y(t) =\exp(\alpha t)$ satisfies $y'(t) = \beta y(t-T)$.)

Calculation or Estimation

Often, we don’t have the Lambert W function readily available on our calculators or programming environments. So, how do we calculate or approximate it? Let’s look at some methods to compute or estimate this function.  Let’s assume $\beta = 0.2$ and $T=30$ for the methods below.

The various methods for calculating or approximating the Lambert Growth Rate are as follows:

  1. Wolfram Alpha: This is a computational knowledge engine that can be used to solve the equation above directly. You could type "solve a*exp(a*30) = 0.2" into the input box of Wolfram Alpha. The answer given by it would be approximately 0.0477468. Alternatively, to calculate $W_0( \beta T )/T$, you could type "W_0(0.2*30)/30".
  2. Python and Scipy: Python is a popular programming language and scipy is a library for scientific computation in Python. You can use scipy’s implementation via "from scipy.special import lambertw" (see the sketch after this list).
  3. Excel: You can access the Lambert W function with the MoreFunc add-in for Excel.
  4. JavaScript: In JavaScript, you can use the math library’s implementation of the Lambert W function. The code would be math.lambertW(x).
  5. Approximations: If you can accept an error of 1 or 2 percent, there are a few approximations available. For instance, if $1.6 \leq x \leq 22$, the Lambert W function can be approximated by the function $$w_1(x) = (.5391 x - .4479)^{(1/2.9)}.$$ We can use this approximation to estimate the growth rate
    $$W_0( 30*0.2)/30= W_0(6)/30\approx (.5391\times 6 - .4479)^{(1/2.9)}/30\approx 0.047463321.$$ Also, if $0\leq x \leq 2$, another approximation function is $$w_2(x) = \frac{x}{(1+x)(1-0.109 x)}.$$
  6. Bisection Method: We want to find the value of $\alpha$ where $$\alpha \exp(\alpha T) = \beta.$$ That value of $\alpha$ is the “root” of the function $$f(\alpha)=\alpha \exp(\alpha T) - \beta.$$ A root of a function $f(\alpha)$ is any $\alpha$ where $f(\alpha)=0$.  The bisection method repeatedly splits an interval in half to narrow down the root of a function. It assumes the function changes sign around the root, and iteratively refines the search interval.  In our case, $f(0)= -\beta<0$ and $$f(\beta) = \beta\exp(T\beta) -\beta>0,$$ so we know that $\alpha=0$ is too low and $\alpha=\beta$ is too high.  The bisection method would now test the midpoint of the interval $[0,\beta]$, which is $\beta/2.$  If $f(\beta/2)>0$, the bisection method begins again on the interval $[0,\beta/2]$, and if $f(\beta/2)<0$, the bisection method continues with the interval $[\beta/2, \beta]$. Every iteration cuts the size of the interval in half.
  7. Newton-Raphson Method: The Newton-Raphson method is a popular root-finding algorithm that uses tangents to the function to find the roots. It can be used to improve the accuracy of the approximations above. The formula for estimating the solution of $f(x)=0$ is $$n(x) = x - f(x)/f'(x).$$For our problem, $$n(x) = x-\frac{x e^{T x}-\beta}{(T x +1)e^{T x}}.$$
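
Here is a short Python sketch combining methods 2, 5, and 7 for $\beta=0.2$ and $T=30$ (my addition; scipy.special.lambertw is the only library call, and $w_1$ and $n$ are transcribed from the text above):

```python
import numpy as np
from scipy.special import lambertw

beta, T = 0.2, 30.0

# Method 2: scipy's Lambert W (principal branch): alpha = W0(beta*T)/T.
alpha_exact = lambertw(beta * T).real / T

# Method 5: the approximation w1, stated above for 1.6 <= x <= 22.
def w1(x):
    return (0.5391 * x - 0.4479) ** (1 / 2.9)

alpha_approx = w1(beta * T) / T

# Method 7: one Newton-Raphson step on f(a) = a*exp(T*a) - beta.
def newton_step(a):
    return a - (a * np.exp(T * a) - beta) / ((T * a + 1) * np.exp(T * a))

print(alpha_exact)                  # 0.0477468...
print(alpha_approx)                 # 0.0474633...
print(newton_step(alpha_approx))    # 0.0477485...
```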

Simulation

 

In the game Master of Orion, the economic growth rate in the early part of the game can be estimated by $\alpha=W_0(T\frac{I}{C})/T$ where $I$ represents the income of a “mature” planet, $C$ is the cost of constructing a colony ship, and $T$ is the number of years between the construction year of a colony ship and the year that the colonized planet is “mature”. I ran a simulation using typical but random values where $10\leq I\leq200$, $200\leq C\leq575$, and $10\leq T\leq90$. Every time I generated random $I$, $C$, and $T$ values, I would use either $w_1$ or $w_2$ to estimate the Lambert growth rate $W_0(TI/C)/T$. Let $x= TI/C$. If $x<1.8$, I used $w_2$; otherwise I used $w_1$. The average error using $w_1$ and $w_2$ was 0.0016. If I then applied $n$ to the approximation, the average error was 0.00017, about 10 times more accurate. When I applied $n$ twice, the average error dropped to 0.000005 and the worst case error was 0.0001. If you want 16 digits of accuracy, you need to apply the function $n$ five times.

Applying Newton-Raphson to estimate the Lambert growth rate for $T=30$ and $I/C = \beta = 0.2$ gives
$$
\begin{aligned}
W_0(I T/C)/T = W_0(6)/30 \approx w_1(6)/30 &\approx 0.047463321\\
W_0(6)/30 \approx n(w_1(6)/30) &\approx 0.047748535122\\
W_0(6)/30 \approx n(n(w_1(6)/30)) &\approx 0.047746825925 \\
W_0(6)/30&\approx 0.047746825863
\end{aligned}
$$

Summary

In this post we presented several methods for approximating the Lambert Growth Rate $$W_0(\beta T)/T.$$ If you don’t have Wolfram Alpha or access to a programming environment that includes the Lambert W function, then one of the best methods for finding the solution is the Bisection Method.  If $x=\beta T<100$, then using $w_1$ or $w_2$ approximates the solution to within 2%.  One iteration of Newton-Raphson typically reduces the error by a factor of 10.  More iterations of Newton-Raphson significantly improve the approximation.

 

The following function has come up twice in my research:

$$f(a,b)=\frac{\ln(b)-\ln(a)}{b-a}$$ where $0<a<b.$

Now, I had known that if $a$ and $b$ are close, then $$f(a,b)\approx\frac{1}{\mathrm{mean}(a,b)}$$ and $$\frac{1}{b}<f(a,b)<\frac1{a}.$$ But last week, with a little help from GPT, I got some better approximations and bounds on $f(a,b)$.

Let $m=(a+b)/2$ and $\Delta = (b-a)/2$.

Below, GPT and I derive the following:

  • $$1/b < f(a,b) < 1/a,$$
  • $$1/m <  f(a,b) < 1/m + \frac{\Delta^2}{3m}\left(\frac{1}{m^2-\Delta^2 }\right),$$
  • $$ -\frac{2\Delta^4}{15a^5}< f(a,b)\; - \frac{1}{6}\left(\frac{1}{a}+\frac{4}{m}+\frac{1}{b}\right) < -\frac{2\Delta^4}{15b^5},\mathrm{\ and}$$
  • $$\begin{aligned} f(a,b) &= \frac{ \tanh^{-1}(\Delta/m)}{\Delta} = \frac1{m} \frac{ \tanh^{-1}(\Delta/m)}{\Delta/m}\\&= \frac{1}{\Delta} \left(\frac{\Delta}{m} + \frac{\Delta^3}{3m^3}+ \frac{\Delta^5}{5m^5} + \cdots\right)\\  &= \frac{1}{m} \left(1 + \frac{\Delta^2}{3m^2}+ \frac{\Delta^4}{5m^4} + \cdots\right) \end{aligned}$$

where  $$ \tanh^{-1}(y) = x$$ if and only if $$\tanh(x) := \frac{ e^x-e^{-x}}{e^x + e^{-x}} =y$$ for any $x\in\mathbb{R}$ and $y\in(-1,1)$.

(Alternatively, $$\tanh^{-1}(x) = \frac{1}{2} \ln (x+1)-\frac{1}{2} \ln (1-x)$$ where $\ln(x)$ is the natural log and $|x|<1$.)

The derivation

At first I tried to use Taylor Series to bound $f(a,b)$, but it was a bit convoluted so I asked GPT. GPT created a much nicer, simpler proof. (See this PDF). GPT’s key observation was that $$f(a,b)= \frac{\ln(b)-\ln(a)}{b-a} = \frac{1}{b-a}\int_a^b \frac{dx}{x}$$ is the mean value of $1/x$ over the interval $[a,b]$. (In truth, I felt a bit dumb for not having noticed this. Lol.)

GPT’s observation inspires a bit more analysis.

Let $$z= \frac{x}{m} -1,\mathrm{\ so\ \ } m(z+1)=x.$$ If $x=a$, then
$$z= \frac{a}{m} -1= \frac{m-\Delta}{m} - 1= -\frac{\Delta}{m}.$$
Similarly, if $x=b$,
$$z= \frac{b}{m} -1= \frac{m+\Delta}{m} - 1= \frac{\Delta}{m}.$$
Applying these substitutions to the integral yields
$$\begin{aligned}
\int_{x=a}^{x=b} \frac{dx}{x} &=\int_{z=-\Delta/m}^{z=\Delta/m}\frac{m\;dz}{m(z+1)} \\
&=\int_{z=-\Delta/m}^{z=\Delta/m}\frac{dz}{z+1} \\
&=\int_{z=-\Delta/m}^{z=\Delta/m} (1-z+z^2-z^3+\cdots)dz\\
&=\int_{z=-\Delta/m}^{z=\Delta/m} (1+z^2+z^4+\cdots)dz\\
&=\int_{z=-\Delta/m}^{z=\Delta/m} \frac{dz}{1-z^2}\\
&=\left.\tanh^{-1}(z)\right|_{z=-\Delta/m}^{z=\Delta/m} \\
&= \tanh^{-1}(\Delta/m) - \tanh^{-1}(-\Delta/m)\\
&= 2 \tanh^{-1}(\Delta/m).
\end{aligned}$$
(Above we twice applied the wonderful rule of thumb $$\frac{1}{1-x} = 1+ x + x^2 +x^3+\cdots$$ for $|x|<1$. See idea #87 from the top 100 math ideas. Note that the odd powers of $z$ dropped out because they integrate to zero over the symmetric interval $[-\Delta/m, \Delta/m]$.)

So,
$$\frac{\ln(b) - \ln(a)}{b-a} =\frac{ 2 \tanh^{-1}(\Delta/m)}{b-a}=\frac{ \tanh^{-1}(\Delta/m)}{\Delta}.$$

Furthermore,
$$
\tanh^{-1}(x) = x + x^3/3 + x^5/5 + \cdots,
$$
so
$$\begin{aligned}
f(a,b) = \frac{\ln(b) - \ln(a)}{b-a} &= \frac{1}{\Delta} \left(\frac{\Delta}{m} + \frac{\Delta^3}{3m^3}+ \frac{\Delta^5}{5m^5} + \cdots\right) \\
&=\frac{1}{m} + \frac{\Delta^2}{3m^3}+ \frac{\Delta^4}{5m^5} + \frac{\Delta^6}{7m^7} + \cdots
\end{aligned}$$
This series gives us some nice approximations of $f(a,b)$ when $\Delta/m<1/2$. We can also bound the error of the approximation $$f(a,b)\approx 1/m$$ as follows: $$\begin{aligned}
\frac{1}{m} <\frac{\ln(b) - \ln(a)}{b-a}&=\frac{1}{m} + \frac{\Delta^2}{3m^3}+ \frac{\Delta^4}{5m^5} + \frac{\Delta^6}{7m^7} + \cdots \\
&<\frac{1}{m} + \frac{\Delta^2}{3m^3}+ \frac{\Delta^4}{3m^5} + \frac{\Delta^6}{3m^7} + \cdots \\
&=\frac{1}{m} + \frac{\Delta^2}{3m^3}(1 + \frac{\Delta^2}{m^2} + \frac{\Delta^4}{m^4} + \cdots )\\
&=\frac{1}{m} + \frac{\Delta^2}{3m^3}\left(\frac{1}{1-\frac{\Delta^2}{m^2} }\right)\\
&=\frac{1}{m} + \frac{\Delta^2}{3m}\left(\frac{1}{m^2-\Delta^2 }\right).\\
\end{aligned} $$

Example.

Let $a= 6/100$ and $b=7/100$. Then $m=13/200$, $\Delta = 1/200$, and

  • $$\frac{\ln(b)-\ln(a)}{b-a}= \tanh^{-1}(\Delta/m)/\Delta \approx 15.415067982725830429,$$
  • $$1/m \approx 15.3846,$$
  • $$1/m + \Delta^2/(3 m^3)\approx 15.41496,$$
  • $$1/m + \Delta^2/(3 m^3) + \Delta^4/(5 m^5)\approx 15.4150675,\ \mathrm{and}$$
  • $$1/m+ \frac{\Delta^2}{3m}\left(\frac{1}{m^2-\Delta^2 }\right)\approx 15.41514.$$
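
These numbers are easy to reproduce in Python (my own check; math.atanh is $\tanh^{-1}$):

```python
# Reproduce the example: exact value of (ln b - ln a)/(b - a) and
# the series / error-bound approximations above.
import math

a, b = 6 / 100, 7 / 100
m, d = (a + b) / 2, (b - a) / 2

print((math.log(b) - math.log(a)) / (b - a))          # 15.4150679827...
print(math.atanh(d / m) / d)                          # the same number
print(1 / m)                                          # 15.3846...
print(1 / m + d**2 / (3 * m**3))                      # 15.41496...
print(1 / m + d**2 / (3 * m**3) + d**4 / (5 * m**5))  # 15.4150675...
print(1 / m + d**2 / (3 * m) / (m**2 - d**2))         # 15.41514...
```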

 

Applying Simpson’s Rule

We can also use Simpson’s rule to approximate $\ln(b)-\ln(a)$.  The error formula for Simpson’s rule is

$$\begin{aligned}\int_{a}^{b}g(x)\,dx&=\frac{\Delta}{3}[g(a)+4g(m)+g(b)]-\frac{\Delta^5}{90}g^{(4)}(\xi)\end{aligned}$$

for some $\xi$ in the interval $(a,b)$. Setting $g(x)=1/x$, $$I=\ln(b)-\ln(a),\quad\mathrm{ and }\quad h(a,b)= \frac{\Delta}{3}\left(\frac{1}{a}+\frac{4}{m}+\frac{1}{b}\right)$$ with $m=(a+b)/2$ and $\Delta=(b-a)/2$ gives

$$\begin{aligned} \ln(b) - \ln(a) &=\frac{\Delta}{3}\left(\frac{1}{a}+\frac{4}{m}+\frac{1}{b}\right)-\frac{\Delta^5}{90}\frac{24}{\xi^5} \\ I&=h(a,b)-\frac{4\Delta^5}{15\xi^5}\\I-h(a,b) &= -\frac{4\Delta^5}{15\xi^5}\\-\frac{4\Delta^5}{15a^5}&< I-h(a,b) < -\frac{4\Delta^5}{15b^5}.\end{aligned}$$

Now we divide by $2\Delta = b-a$ to conclude that
$$\begin{aligned}\frac{\ln(b) - \ln(a)}{b-a}&= \frac{1}{6}\left(\frac{1}{a}+\frac{8}{a+b}+\frac{1}{b}\right) + \mathrm{error} \\&= \frac{1}{6}\left(\frac{1}{a}+\frac{4}{m}+\frac{1}{b}\right) + \mathrm{error} \\&\approx \frac{1}{6}\left(\frac{1}{a}+\frac{4}{m}+\frac{1}{b}\right) -\frac{2\Delta^4}{15m^5}\end{aligned}$$
where
$$ -\frac{2\Delta^4}{15a^5}< \mathrm{error} < -\frac{2\Delta^4}{15b^5}.$$
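
(A numeric check of this bracket, using the same $a$ and $b$ as the example above; my addition:)

```python
# Check -2 d^4/(15 a^5) < error < -2 d^4/(15 b^5) for a = 0.06, b = 0.07.
import math

a, b = 6 / 100, 7 / 100
m, d = (a + b) / 2, (b - a) / 2

exact = (math.log(b) - math.log(a)) / (b - a)
simpson = (1 / a + 4 / m + 1 / b) / 6
err = exact - simpson
print(err)                                  # about -7.2e-05
assert -2 * d**4 / (15 * a**5) < err < -2 * d**4 / (15 * b**5)
```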

 

(Thanks to GPT and StackEdit (https://stackedit.io/).)

 
