
But how did people use these tables? It’s actually a small skill in and of itself. In this post, I’ll outline how log tables were used for most of the 20th Century, having by then evolved into their most streamlined form. I’ll use a set of four figure tables for these examples to keep things simple, but tables with higher precision were available, along with more sophisticated correction methods. This will probably only serve as a refresher for log table veterans.

We’ll stick with the basics for now. Below is a twin set of four-figure log and anti-log tables, in the common base of $10$, listing values $\log_{10}x$ for $x$ from $1.0$ to $10$ and $10^m$ for $m$ from $0.0$ to $1.0$. The values are given to 4 places of decimals, hence the term four-figure tables. Each table consists of two pages, lying facing one another.

So, how were these tables read? Let’s focus on the log tables for now. Consider the following crop showing the top left of the first page of the $\log$ table.

The leftmost column lists values of $x$ in increments of $0.1$, from $1.0$ down to $2.6$. The top row lists additional second-place digits for $x$, with a further set of columns to the right for third-place digits — but more on these later.

Returning to the values of $x$ in the first column, the second column lists the corresponding values of $\log_{10} x$, minus a leading zero to the left of the decimal point. So for example: To calculate the value of $\log_{10}(1.4)$, simply travel down the leftmost column to $1.4$ and read off the digits next to this to obtain $\log_{10}(1.4)=0.1461$, which is the value correct to 4 decimal places. The image below illustrates this process, starting from the solid red block.

So far so good, but what if a more finely graded table in $x$ is needed? Suppose we want to calculate the log of $2.05$. Here’s where the columns at the top come in. These represent additional digits of $x$.

To find $\log_{10} (2.05)$, we navigate using the digits of $x$. First travel down to $2.0$ on the leftmost column. Now, staying on the same row as $2.0$, travel right along the columns until the first $5$ column is reached. The set of four digits at the intersection of the $2.0$ row and the large $5$ column are $3118$. These represent the digits to the right of the (unprinted) decimal point. So $\log_{10}(2.05) = 0.3118$.

But it’s possible to obtain an even finer grading in $x$ by using the mean differences columns, with only a slight loss in accuracy. Suppose we want to find $\log_{10}(1.836)$. As before, we navigate by digits. First, travel down the leftmost column to $1.8$. Staying on this row, travel across to the first $3$ column. The digits there read $2625$. But we’re not done.

With these four digits in hand, and still on the same row, travel across to the second set of columns representing third place digits, and go to the smaller $6$ column. The digits listed are $14$; this is the mean difference, which must be added to $2625$ to obtain

\[ 2625 + 14 = 2639\]

These are the estimated four digits of the result, and so $\log(1.836) = 0.2639$. The second column set of mean differences has allowed a table with a three digit grading of $x$ from $1.0$ to $10$, on only two facing pages. The log tables can be left open and 4-figure logs of numbers to $3$ significant digits can be looked up without having to turn a page.
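In modern terms, this whole lookup is mechanical enough to simulate in a few lines of Python. The sketch below (with hypothetical helpers `entry` and `log4`) computes its table entries on the fly, but rounds them exactly as a printed four-figure table rounds them. Real tables averaged the mean differences over a whole row, so a genuine printed entry may occasionally differ by one in the final digit.

```python
import math

def entry(x):
    """The four printed digits for a three-figure x, e.g. entry(1.83) -> 2625."""
    return round(10000 * math.log10(x))

def log4(x):
    """Four-figure log10 of a three-decimal x such as 1.836, via a mean difference."""
    n = round(x * 1000)                        # 1836: row 1.8, column 3, small column 6
    digits = entry(n // 10 / 100)              # row plus large column: entry(1.83) = 2625
    gap = entry((n // 10 + 1) / 100) - digits  # spacing to the next large column
    mean_diff = round(gap * (n % 10) / 10)     # proportional part: round(23 * 6 / 10) = 14
    return (digits + mean_diff) / 10000

print(log4(1.836))  # 0.2639, matching the worked example
print(log4(2.05))   # 0.3118
```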

The anti-log tables worked similarly. For example, suppose we want to find the value of $10^{0.5678}$. The tables give us four significant figures of a result lying between $10^0=1$ and $10^1=10$, but otherwise we navigate by digits as before.

Travelling down the leftmost column, we pass on to the second page until reaching the $5.6$ row. Then we travel across to the large $7$ column and read off $3690$, and then travel to the $8$ column in the mean differences to read $7$. Adding these two numbers gives $3690+7=3697$. So $10^{0.5678}=3.697$, as the result lies between $1$ and $10$.

So that’s how logs and exponents were found in the days before digital calculators. More finely graded values could be interpolated at a pinch. More advanced sets of tables also included higher differences to facilitate this, though most everyday tables stuck to just mean first differences.

Typically, there were also more than just logs and anti-logs in a set of tables. All kinds of commonly used functions could be listed. The image below shows the tables being used to calculate $\sin (11^\circ 34′) = 0.2005$.

All of the tables above are taken from the old Irish state mathematical tables — since discontinued. A more prolific set of four-figure tables was Godfrey and Siddons’ *Four-Figure Tables*, printed by Cambridge University Press, and apparently one of the few publications there to rival the Bible in terms of numbers sold. Tables like these were printed and reprinted in the hundreds of thousands and were in essentially universal use. The tables allowed look-ups of many special functions: $\cos$, $\sin$, $\tan$, $x^2$, $\sqrt{x}$, $1/x$, $\ln x$, $e^x$, and of course common logs and anti-logs $\log_{10} x$ and $10^x$. By shelling out a little more, you could also get sets of five-figure tables, which offered more precision, more functions, more data, and more mathematical formulas — in effect, the graphing calculators of their day.

All in the past now. Modern tables still include mathematical formulas, which is quite a good thing, as tables only get bigger and better the farther you go in mathematics, and learning that they *should* be used is a useful lesson in and of itself. But there will likely be no more *tables* in mathematics tables anymore, that role now being fulfilled by calculators and computers. Still, as with slide rules, old tables linger on in the hands of a few enthusiasts. And who knows, the old tables may yet have a few tricks to teach us.

The post How to Use Log Tables appeared first on A Bit of Auld Maths.


It was only a few short years after Napier’s discovery of logarithms that they made their first, and lasting, impact on the sciences. They were instrumental in uncovering the most important of Kepler’s Laws of planetary motion.

The German astronomer Johannes Kepler had published his first two laws of planetary motion in 1609. These were

- I — The planets orbit the sun in ellipses, with the sun at one focus.
- II — The line joining a planet to the sun sweeps out equal areas in equal times.

But for nearly a decade afterwards Kepler, a prodigious calculator, laboured in vain to find some further relationship hidden in the masses of observational data that the late Tycho Brahe had gathered. In particular, he had yet to find a law linking the orbits of all the planets in some way.

Enter the logarithm. Kepler was an instant convert on discovering Napier’s logs in 1616, and would eventually publish tables of his own. But before this, he seems to have used logs to finally discover the elusive third law he’d been looking for.

As in the previous post, we’ll ‘cheat’ a little here by using modern notation (and data). Kepler had data on the orbital parameters of the first six planets, among them the orbital period $T$, and the semi-major axis $R$. Below is a table of suitably normalised values of $T$, $R$ for each of the planets in the solar system, and their logarithms $\log_{10}(T)$ and $\log_{10}(R)$. To make things clearer here, two planets which were as yet undiscovered in Kepler’s time — Uranus and Neptune — have both been included in the data.

\[

\begin{array}{l|c|c|c|c|}

\text{Planet} & \text{T (/yrs)}& \text{R (/$10^6$ km)} & \log_{10}(T) & \log_{10}(R)\\

\hline

\text{Mercury (m)} & 0.24 & 57.91 &-0.618 & 1.763\\

\text{Venus (V)} & 0.62 & 108.21 &-0.211 & 2.034\\

\text{Earth (E)} & 1.0 & 149.6 &0.0 & 2.175\\

\text{Mars (M)} & 1.88 & 227.94 & 0.274 & 2.358\\

\text{Jupiter (J)} & 11.86 & 778.34 &1.074 & 2.891\\

\text{Saturn (S)} & 29.45 & 1426.71 & 1.469 & 3.154\\

\text{Uranus* (U)} & 84.02 & 2870.63 &1.924 & 3.458\\

\text{Neptune* (N)} & 164.79 & 4498.39 &2.217 & 3.653\\

\hline

\end{array}

\]

Plotting these logarithm pairs gives what is called a log-log plot of the data, and this is also shown below.

The plot shows immediately that there is a linear relationship between the logs. In fact, the slope of the line in the plot can be measured to be $\cong 2/3$. This means that

\[\log_{10}(R) = \frac{2}{3} \log_{10}(T) +C \]

for some constant $C$. Solving this log equation therefore gives

\[ T^2 = \alpha R^3\]

where $\alpha$ is a constant. This power law is the planetary relationship being sought. Logarithms will always reveal any such power law relationships as straight lines on a log-log plot.
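That near-perfect linearity can be checked numerically from the table’s figures. Below is a minimal sketch (plain least squares over the $(\log_{10} T, \log_{10} R)$ pairs, no plotting):

```python
import math

# (T, R) pairs from the table: period in years, semi-major axis in 10^6 km.
data = [
    (0.24, 57.91), (0.62, 108.21), (1.0, 149.6), (1.88, 227.94),
    (11.86, 778.34), (29.45, 1426.71), (84.02, 2870.63), (164.79, 4498.39),
]

# Take logs of both columns -- the coordinates of the log-log plot.
xs = [math.log10(T) for T, _ in data]
ys = [math.log10(R) for _, R in data]

# Ordinary least-squares slope of log10(R) against log10(T).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))  # close to 2/3
```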

It seems likely that a log enthusiast like Kepler would have tabulated logarithms of his data values at some point. Upon doing so, he would have been one step away from discovering this last law. In 1619, three years after coming across logarithms, he published his third and final law of planetary motion

- III — The square of a planet’s orbital period is proportional to the cube of its semi-major axis.

This law completed the set by relating the orbits of all the planets around the sun together. In fact, the correct constant is $\alpha = 4\pi^2/(G M_{\odot})$, where $M_{\odot}$ is the mass of the sun, revealing that it is the sun which is the common cause of planetary motion in the solar system. This was the law which later allowed Newton in 1687 to show that it was in fact the inverse square gravitational pull of the sun that caused the planets to orbit it.

So, from the very start, logs were more than just a simple arithmetical tool. Their discovery opened up gateways to whole new avenues of research in the sciences. Logs have since come to be recognised as an important element of the scientific revolution in general, and by now they have become a ubiquitous part of both mathematics and the sciences. Not bad for something Napier had originally intended simply as an aid to pen and paper arithmetic.

The post Logs make an Impact: Kepler’s Third Law appeared first on A Bit of Auld Maths.


Logarithms were discovered by the Scottish mathematician John Napier in the early 1600’s. Napier was looking for ways to speed up arithmetic, and also developed what are now called Napier’s bones towards the same purpose. But we’ll focus on logs in this post.

Napier’s method of deriving logs is a little confusing. Napier came at the problem by considering the motion of two particles, one moving with constant velocity, and another whose motion slowed as it approached its destination. This was all in the days before calculus, so Napier had to explain how to relate quite complicated motions using only static or geometrical arguments. This is probably where a lot of the historical confusion comes from. Here, I’m just going to ‘cheat’ by using calculus to explain the basic ideas.

Napier’s two particles are linked by their motion in time. One moves to the right with constant speed $u$, travelling a distance $d=ut$ over time. Napier called this “arithmetic” motion. The second particle starts at one end of an interval of length $A$, and ‘falls’ back towards the other end $O$, but with a speed $v$ which is proportional to its current distance from $O$. Napier called this “geometric” motion.

Let’s ‘cheat’ and use calculus to try and understand this second motion. Let $s$ be the current distance of the particle from $O$. Then the proportional velocity is given by $v=-\alpha s$, for some constant $\alpha$. And since the particle starts at $s=A$, we just need to solve the differential equation

\[ v=\frac{ds}{dt}= -\alpha s ; \qquad s(0)=A\]

which has solution

\[s=Ae^{-\alpha t}, \qquad \text{or} \qquad s=A\beta^{-t}\]

So the first thing we can see is that the particle never reaches $O$. It ‘falls’ to the left forever. Secondly, we can see that the particle reduces its current distance to $O$ by a fixed factor with each time step: $s(1)=A\beta^{-1}$, $s(2)=A\beta^{-2}$, $s(3)=A\beta^{-3}$ and so on. Distance is behaving like a geometric sequence $ar^n$, hence the term “geometric motion”.

Meanwhile, of course, the arithmetic particle has moved a distance $d=ut$, so the distances $s$ and $d$ are linked by a common time. Now here comes the big step. For simplicity, let $A = 1$, and consider the linked distance pairs

\begin{align}

s_1&= \beta^{-t_1} & d_1=&u t_1\\

s_2&= \beta^{-t_2} & d_2=&u t_2

\end{align}

Next consider the product $s_1 s_2=\beta^{-(t_1+t_2)}=s_3$. This distance $s_3$ is also linked to a corresponding $d_3$ with

\begin{align}

s_3&= \beta^{-(t_1+t_2)} & d_3=&u (t_1+t_2)

\end{align}

If we had some way of translating between these corresponding $s$ and $d$ values then we could do the following procedure: take $s_1$ and $s_2$; lookup the corresponding $d_1$ and $d_2$; add these to obtain $d_3=d_1+d_2$; then reverse lookup to find $s_3$, which is then the desired product $s_1 s_2$. The whole process hinges on being able to translate between $s$ and $d$ values. In short, what we need is a set of *tables* for the functions

\[

d= -u \log_\beta \left( \frac{s}{A} \right) \qquad \text{and} \qquad s=A \beta^{-(d/u)}

\]

And this is just what Napier produced, out of nowhere in 1614, in the book *Mirifici Logarithmorum Canonis Descriptio*, containing 57 pages of exposition and 90 pages of detailed tables. This set of tables allowed you to hop back and forth between $s$ and $d$, and hence allowed you to turn multiplication into addition, division into subtraction, and all the rest which follows.

It took him the guts of 20 years to create.
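In modern terms, Napier’s scheme is easy to simulate. The sketch below uses the illustrative values $A=u=1$ and $\beta=e$ rather than Napier’s actual choices; the two lookup functions stand in for his tables.

```python
import math

A, u, beta = 1.0, 1.0, math.e   # illustrative parameters, not Napier's

def d_of(s):
    """'Arithmetic' distance d linked to 'geometric' distance s (the log table)."""
    return -u * math.log(s / A) / math.log(beta)

def s_of(d):
    """The reverse lookup (the anti-log table)."""
    return A * beta ** (-d / u)

# Multiply two s-values by adding their d-values, then reversing the lookup.
s1, s2 = 4.89, 3.37
product = s_of(d_of(s1) + d_of(s2))
print(round(product, 4))  # 16.4793, i.e. s1 * s2
```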

Another cause of confusion in the histories is the seemingly strange base Napier chose. He apparently picked the equivalents of $A=u=10^7$, and $\beta= ( 1-10^{-7} )^{-10^7}$ — because, why not? Actually, $( 1-10^{-7} )^{-10^7} \cong e$, so Napier was quite close to what we now call natural logarithms, with $d=10^7 \ln ( 10^7/s )$, even though the number $e$ had not yet been discovered. But to cap it all off, Napier decided to set $s = A\sin(\theta)$ and to give the logarithms corresponding to angles $\theta$ from $0^\circ$ to $90^\circ$ — because, why not? Most likely, he had uses in astronomy and geometry in mind for his tables.

In any event, logarithms turned out to be such an instant hit that all conceptual difficulties were immediately forgiven. Within a year of the *Descriptio* being published, Napier had collaborated with the English mathematician Henry Briggs to begin producing a set of tables in base $10$. This set of common (or Briggsian) logarithms was eventually published by Briggs in the *Arithmetica Logarithmica* in 1624, seven years after Napier’s death in 1617. Over the next 100 years or so, logarithms would be gradually refined into the form we know today, with new tables and slide rules being invented along the way.

But logarithms would not have to wait that long to make their first big impact on the sciences.

The post Napier’s Discovery of Logs appeared first on A Bit of Auld Maths.


As they progress in mathematics, everyone eventually encounters logarithms. Nowadays these are introduced as a purely algebraic tool, useful for bringing variables down from powers and for solving index equations. But not so long ago logarithms were primarily an *arithmetical* tool, indispensable in basic calculation. Via tables or slide rules, they were used daily by millions to multiply and divide, and to take powers and roots. You don’t even have to be that old to actually remember doing this.

However, logarithms in basic arithmetic were made obsolete by the arrival of affordable electronic calculators in the late 1970’s. The world collectively put away its slide rules and log tables, and nowadays most people — myself included — will study the complete algebra of logs without ever learning about their original purpose and function. A little sad when you think about it.

So, this post is intended as an introduction to the basic arithmetic of logs for those still unaware of it; and as a refresher for older/enlightened readers. However, I’ll also show how logs live on as an alternative to direct calculation. Even today, when the chips are down, the power of logs can still trump direct calculation by even the most powerful computers.

Most learn the algebra of logs from their definition as an inverse of exponentiation. If $b^m=x$, then $\log_b x = m$, and from this definition the basic rules of logs follow. Lists vary, but these rules usually include the following.

\begin{align}

\bullet&\ \text{I} &\log_b(xy)&= \log_b(x) + \log_b(y)\\

\bullet&\ \text{II} &\log_b(x/y)&= \log_b(x) - \log_b(y)\\

\bullet&\ \text{III}&\log_b(x^p)&= p\log_b(x)\\

\bullet&\ \text{IV} &\log_b(a)&= \frac{\log_c(a)}{\log_c(b)}

\end{align}

This is essentially as much as modern students learn (probably wondering all the while what on earth they’re doing). But originally, these rules were always paired with a set of log and anti-log tables. It was the combination of the rules and the tables which made logs so useful in the first place.
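The rules themselves are easy to spot-check numerically. In the sketch below the values are arbitrary positive test numbers, and rule IV is taken as the change-of-base rule $\log_b(a) = \log_c(a)/\log_c(b)$:

```python
import math

x, y, p, a, b, c = 4.89, 3.37, 2.5, 7.0, 10.0, math.e
log = math.log  # math.log(value, base)

assert math.isclose(log(x * y, b), log(x, b) + log(y, b))   # rule I
assert math.isclose(log(x / y, b), log(x, b) - log(y, b))   # rule II
assert math.isclose(log(x ** p, b), p * log(x, b))          # rule III
assert math.isclose(log(a, b), log(a, c) / log(b, c))       # rule IV (change of base)
print("all four rules check out")
```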

Two sets of tables were needed: A log table, listing out values of $\log_b(x)$ for a given base $b$; and also an anti-log table, which listed out values of $b^m$ in the same base. For practical purposes, the base was almost always 10. Logs were given for values of $x$ from 1.0 up to 10, while anti-logs were given for values of $m$ from 0.0 to 1. Real log tables had far more elaborate designs which I’ll discuss in a later post, but for now, we’ll consider them to be simple lists of values like so

\[

\begin{array}{c|c}

x & \log_{10}(x)\\

\hline

1.0 & 0.0000\\

1.1 & 0.0414\\

1.2 & 0.0792\\

1.3 & 0.1139\\

1.4 & 0.1461\\

1.5 & 0.1761\\

1.6 & 0.2041\\

1.7 & 0.2304\\

1.8 & 0.2553\\

1.9 & 0.2788\\

2.0 & 0.3010\\

2.1 & 0.3222\\

\vdots & \vdots\\

10.0 &1.0000

\end{array}

\qquad

\begin{array}{c|c}

m & 10^m\\

\hline

0.00 & 1.000\\

0.01 & 1.023\\

0.02 & 1.047\\

0.03 & 1.072\\

0.04 & 1.096\\

0.05 & 1.122\\

0.06 & 1.148\\

0.07 & 1.175\\

0.08 & 1.202\\

0.09 & 1.230\\

0.10 & 1.259\\

0.11 & 1.288\\

\vdots & \vdots\\

1.00 &10.000

\end{array}

\]

Depending on how large you were prepared to make your tables (read: how much time and money you were prepared to spend), your tables could have finer increments for $x$ and $m$, give more decimal places for the logs and anti-logs, and supply extra corrections for additional accuracy. You could also dispense with tables altogether and use slide rules — devices whose use became an art in and of itself.

We’ll just consider tables for now, and how they were used with each of the rules above. In what follows below, it will be assumed that we have access to a more finely graded set of tables such as

\[

\begin{array}{c|c}

x & \log_{10}(x)\\

\hline

1.000 & 0.0000\\

1.001 & 0.0004\\

1.002 & 0.0009\\

1.003 & 0.0013\\

\vdots & \vdots

\end{array}

\qquad

\begin{array}{c|c}

m & 10^m\\

\hline

0.0000 & 1.0000\\

0.0001 & 1.0002\\

0.0002 & 1.0005\\

0.0003 & 1.0007\\

\vdots & \vdots

\end{array}

\]

Now comes the power of logs. First we must cast our minds back to the days before calculators (there will be a payoff for the modern age later on). Suppose you needed to calculate the product $4.89 \times 3.37$ to a relatively good approximation. You could just multiply things out, but that would take a while, and you’d need to be quick and accurate with your times tables.

Log tables offered a faster option. A quick lookup in the log table reveals that $\log_{10}(4.89)=0.6893$ and $\log_{10}(3.37)=0.5276$. Now, consider taking the log of the unknown product to the base ten, and applying rule I.

\begin{align*}

\log_{10}\left(4.89 \times 3.37\right) &= \log_{10}(4.89)+\log_{10}(3.37)\\

&\cong 0.6893+0.5276

\end{align*}

So it follows that approximately,

\[ \log_{10}\left(4.89 \times 3.37\right) \cong 1.2169\]

We still don’t know what the product is, but taking the exponent of both sides and manipulating indices gives.

\[ 4.89 \times 3.37 \cong 10^{1.2169} = 10^{0.2169} \times 10^1 \]

So the product is an integer power of $10$, times the mantissa $10^{0.2169}$. Looking up $10^m$ in the anti-log tables gives $10^{0.2169} \cong 1.647$ and so

\[ 4.89 \times 3.37 \cong 1.647 \times 10^1= 16.47 \]

The true answer is

\[ 4.89 \times 3.37 = 16.4793\]
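The whole recipe (two lookups, one addition, a split of the index, one anti-lookup) is mechanical, and can be sketched by simulating both tables as true values rounded to four figures:

```python
import math

def log4(x):
    """Simulated four-figure log table lookup, for 1 <= x < 10."""
    return round(math.log10(x), 4)

def antilog4(m):
    """Simulated four-figure anti-log table lookup, for 0 <= m < 1."""
    return round(10 ** m, 4)

# Rule I: log(4.89 x 3.37) = log 4.89 + log 3.37
total = log4(4.89) + log4(3.37)          # 0.6893 + 0.5276 = 1.2169
mantissa, index = total % 1, int(total)  # split off the integer power of 10
product = antilog4(mantissa) * 10 ** index
print(round(product, 3))                 # 16.478, against a true value of 16.4793
```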

So we’ve obtained more or less the four most significant digits of the product without having to multiply anything at all. All we needed was three lookups, one addition, and a little index shifting. Since addition is much easier than multiplication, we can save ourselves a lot of effort and avoid many pitfalls, albeit at the cost of a little accuracy. Next comes division.

Formidable mental calculators, and purists accustomed to 11 digits of digital accuracy, might scoff at the meagre gains afforded by combining the tables with rule I. The first group at least can be easily won over by applying rule II.

This time, let’s consider calculating the quotient $\frac{4.89}{3.37}$ to a relatively good approximation. We are still in the days before calculators, so a direct calculation will involve performing long division. Since we all remember long division from school, we’ll use logarithms instead. With the same lookups as before, log rule II gives

\begin{align*}

\log_{10}\left(\frac{4.89}{3.37}\right) &= \log_{10}(4.89)-\log_{10}(3.37)\\

&\cong 0.6893-0.5276\\

\Rightarrow \quad \log_{10}\left(\frac{4.89}{3.37}\right) &\cong 0.1617

\end{align*}

Taking the exponent this time gives only a mantissa, which can be looked up directly in the anti-log tables.

\[ \frac{4.89}{3.37} \cong 10^{0.1617} \cong 1.451 \]

The result required three lookups and one subtraction. Carrying out explicit long division gives the quotient as being closer to

\[\frac{4.89}{3.37} \cong 1.45103857566766\]

So our log tables have again given us four significant digits of the result, and this time with much less work than a direct calculation. Subtraction is much easier than division.

And since in general long division of decimals gives only an approximate answer anyway, we haven’t lost much in principle either. We’re also much less likely to make a mistake in a simple subtraction than in a long division. And if we need more digits, we can use the log approximation as an initial step in a faster division algorithm such as Newton’s method. All in all, logs and lookups are decisively outperforming pen and paper long division.

These methods really pay off if you need to multiply several numbers together, you’re only looking for approximate answers, and/or if you’re working in scientific notation anyway. Hence logs were extremely popular with engineers and anyone who just needed a “ballpark” answer. Suppose you need to calculate the following

\[ A=\frac{62830 \times 0.00931 }{23.15}\]

At first it appears that all three numbers are outside the range of the standard $1$ to $10$ log table. However, writing the numbers in scientific notation, then using rule I and remembering that $\log_{10}(10^n) = n$, their logs can still be found using just the standard table.

\begin{align}

\log_{10}(62830) &= \log_{10}(6.2830 \times 10^5) = 4+\log_{10}(6.283) = 4.7982\\

\log_{10}(0.00931) &= \log_{10}(9.31 \times 10^{-3}) = -3+\log_{10}(9.31) = -2.0311\\

\log_{10}(23.15) &= \log_{10}(2.315 \times 10^1) = 1+\log_{10}(2.315) = 1.3646

\end{align}

Applying rules I and II simultaneously now gives the log of $A$ as

\[\log_{10}(A) \cong 4.7982-2.0311-1.3646 = 1.4025\]

And using the anti-log tables for the mantissa as before gives

\[ A\cong 10^{0.4025} \times 10^{1} = 25.26\]

The true answer is closer to $A \cong 25.2677019438445$, but evidently life is a lot easier with logs.
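The index bookkeeping here is also mechanical enough to automate. A sketch along the same lines, again with the tables simulated by four-figure rounding:

```python
import math

def log4(x):
    """Four-figure common log of any positive number, via scientific notation."""
    c = math.floor(math.log10(x))                 # characteristic: 4 for 62830, -3 for 0.00931
    return c + round(math.log10(x / 10 ** c), 4)  # characteristic plus four-figure mantissa

logA = log4(62830) + log4(0.00931) - log4(23.15)  # rules I and II
c = math.floor(logA)                              # the integer power of 10
A = round(10 ** (logA - c), 4) * 10 ** c          # anti-log of the mantissa, shifted back
print(round(logA, 4), round(A, 2))                # 1.4025 25.26
```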

Before moving on, I’ll quickly add the following calculation to show what happens if the logs ever add to a negative number. Suppose you wanted to calculate $B=23.15/62830$.

\begin{align}

\log_{10}(B) &\cong 1.3646-4.7982 = -3.4336\\

\Rightarrow B &\cong 10^{0.5664} \times 10^{-4} = 3.684 \times 10^{-4}

\end{align}

But we’re still not done! Pre-calculator log-based arithmetic could handle powers and roots as well. This is done by using log rule III — in conjunction with the tables of course.

Suppose you want to calculate $21.34^5$. Naturally, we don’t want to multiply things out. Applying rule III gives

\[\log_{10}(21.34^5) \cong 5 \log_{10}(2.134 \times 10^1) = 5 \times 1.3292 = 6.646 \]

And so

\[21.34^5 \cong 10^{0.646} \times 10^6 = 4.426 \times 10^6 \]

The true answer is exactly $4425599.1543363424$, so again logs are paying off, although we did need to multiply this time.

Next, let’s try calculating a root. Something suitably awkward like $\sqrt[7]{77.77}$, which is the same as $77.77^{1/7}$. Proceeding using rule III as before gives

\[\log_{10}(77.77^{1/7}) \cong \tfrac{1}{7} \log_{10}(7.777 \times 10^1) =\tfrac{1}{7} \times 1.8908 \cong 0.2701 \]

Which then leads to

\[\sqrt[7]{77.77} \cong 10^{0.2701} = 1.8625 \]

Which is accurate to four significant digits. This time we had to divide by $7$, but the result was well worth the price of this and only two look-ups.
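Both rule III calculations can be replayed the same way, again with the lookups simulated by four-figure rounding:

```python
import math

def log4(x):
    """Simulated four-figure log table lookup, for 1 <= x < 10."""
    return round(math.log10(x), 4)

# The power: 21.34^5 via rule III
log_p = 5 * (1 + log4(2.134))             # 5 * 1.3292 = 6.646
print(round(10 ** (log_p % 1), 3), int(log_p))  # mantissa 4.426, power 6

# The root: 77.77^(1/7) via rule III
log_r = round((1 + log4(7.777)) / 7, 4)   # 1.8908 / 7, rounded to 0.2701
print(round(10 ** log_r, 4))              # 1.8625
```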

It is difficult to overstate the impact of logarithms upon their arrival in the 1600’s. Their introduction made it feasible to calculate things like ratios, products, and cube roots on an everyday basis. But more on that later.

To those accustomed to the power and flexibility of modern computers, the methods above may seem rather quaint. Today it is possible to perform all of the above on a pocket calculator, faster and more accurately than with any set of tables, and without needing to remember log rules or to shift indices about.

However the power of logs never really went away, and if we venture just a small way beyond ordinary calculations, we’ll find that logs are still a relevant arithmetical tool today. Consider the problem of calculating

\[ 100! = 1 \times 2 \times 3 \times 4 \times \cdots \times 99 \times 100\]

Most modern pocket calculators still fail to compute this number, as the product grows beyond their capacity long before $100$ is reached (those possessing the latest models, or working on a computer system, can simply consider calculating $1000!$ instead in what follows). Still, we’d like to have some kind of estimate of $100!$. Is it bigger than $10^{100}$? $10^{200}$? $10^{1000}$? We can’t just give up at the first calculator error.

And so, just as in days of old, it’s logarithms which come to the rescue, turning multiplication into addition and making the problem tractable again. Taking the logs of both sides — now using the calculator/computer as our “tables” — leads to the perfectly tractable sum

\begin{align}

\log_{10} 100! &= \log_{10} 1 + \log_{10} 2 + \log_{10} 3 + \cdots +\log_{10}100\\

&= 0+0.30103 + 0.47712+\cdots + 2 \cong 157.97

\end{align}

And so

\[100! \cong 10^{0.97} \times 10^{157} = 9.3325 \times 10^{157}\]
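Replaying this estimate with the computer standing in for the tables takes only a couple of lines:

```python
import math

# Sum the logs instead of multiplying the factors.
log_fact = sum(math.log10(k) for k in range(2, 101))
power = math.floor(log_fact)
mantissa = 10 ** (log_fact - power)
print(f"100! ~ {mantissa:.4f} x 10^{power}")  # 100! ~ 9.3326 x 10^157
```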

Specialised software exists that can compute the complete factorial, which turns out to be

\begin{align*}100! =

&93326215443944152681699238856266700490715968264381621468,\\

&59296389521759999322991560894146397615651828625369792082,\\

&7223758251185210916864000000000000000000000000

\end{align*}

And so simple logarithms can supply the correct order of magnitude and the first four significant digits of the result. The same procedure can estimate $1000!$ as

\[1000! \cong 4.02387 \times 10^{2567}\]

which should be enough to persuade almost anyone that logarithms still have a point to make in arithmetic. For any lingering doubters I leave the following estimates of one million and one billion factorial, obtained using logs and Stirling’s approximation

\[1,000,000! \cong 8.2639 \times 10^{5565708} \]

\[1,000,000,000! \cong 9.9046 \times 10^{8565705522}\]

I guarantee your pocket calculator won’t be able to do that!
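For the curious, these last estimates are straightforward to reproduce: `math.lgamma(n + 1)` returns $\ln(n!)$ (computed internally from Stirling-type series), and dividing by $\ln 10$ converts it to a common log.

```python
import math

def factorial_estimate(n):
    """Leading digits and power of 10 of n!, via lgamma(n + 1) = ln(n!)."""
    log10_fact = math.lgamma(n + 1) / math.log(10)
    power = math.floor(log10_fact)
    return 10 ** (log10_fact - power), power

m, p = factorial_estimate(10 ** 6)
print(f"1,000,000! ~ {m:.4f} x 10^{p}")  # 1,000,000! ~ 8.2639 x 10^5565708
```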

So that’s a glimpse into the forgotten power of logarithms. Next time, I’ll discuss some of the history and impact of logs before describing the all important log tables and how to use them. Until next time.

The post The Arithmetic of Logs appeared first on A Bit of Auld Maths.
