Just in time for me to start talking about taking natural logs of our macroeconomic data, come two supportive posts.
As a field, we do a huge disservice to students by not starting out with logarithms in both macro and micro principles. At SUU we’re probably worse than average at this (my personal opinion is that the cop-out “I’m not good at math” is especially prevalent in Utah).
Anyway, future Nobel Prize medium-list contender James Hamilton, writing at Econbrowser, offered up the post entitled “Use of Logarithms in Economics”.
The comments section of that post led me to Miles Kimball, writing at Confessions of a Supply-Side Liberal, who offered up a long and detailed post entitled “The Logarithmic Harmony of Percent Changes and Growth Rates”. Here’s what he says:
… I let students in my Principles of Macroeconomics class in on the secret that logarithms are the central mathematical tool of macroeconomics.
Both mentioned a feature of percentage changes and logarithms that I kinda’ sorta’ understood, but never completely put together in my mind. I don’t know if there’s a formal name for this, but there’s a fallacy that gamblers often fall for (and that, of course, casinos love). Investors fall for it too. It’s this: if you start with $100, win 50% of that, bet it all, and lose 50% … you’ll end up down $25. Check the math if you don’t believe me. I also know that a change in a natural log of 0.5 approximates a 50% gain. What I didn’t put together is that this fallacy doesn’t occur if you use the natural log approximation: if you start out at any natural log level, say 4, gain 0.5, bet it all, and lose 0.5, you’ll end up back at 4.
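A quick numerical sketch of both halves of this (the dollar figures are the ones from the paragraph above; the last line just notes that a 0.5 log change is an approximation to, not exactly, a 50% gain):

```python
import math

# Start with $100, win 50%, bet it all, then lose 50%: $75, down $25.
stake = 100.0
after_win = stake * (1 + 0.50)        # 150.0
after_loss = after_win * (1 - 0.50)   # 75.0

# In natural logs, equal-sized moves cancel exactly:
# start at a log level of 4, gain 0.5, lose 0.5, and you're back at 4.
log_level = 4.0 + 0.5 - 0.5           # 4.0

# The "0.5 log change is about a 50% gain" statement is an approximation;
# the exact percentage gain implied by a log change of 0.5 is e^0.5 - 1.
exact_gain = math.exp(0.5) - 1        # about 0.649, i.e. 64.9%
```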
The algebra of this fallacy is that if you start with x, and go up by a percentage p, you end up with this:

x(1 + p)

But if you then go down by the same percentage p, which means multiplying by (1 − p), you end up with this:

x(1 + p)(1 − p) = x(1 − p²) = x − xp²
Note that this is only equal to x when either x or p is zero. So, I didn’t have to choose $100: it would’ve worked with any amount that wasn’t zero. And I didn’t have to choose 50% for p: any percentage that isn’t zero will get us the same result.
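A quick check of that claim, assuming nothing beyond the algebra above: up by p and then down by p multiplies x by (1 − p²), so the shortfall from where you started is x times p², whatever nonzero x and p you pick.

```python
# Verify that x(1 + p)(1 - p) equals x(1 - p**2) for several arbitrary
# values, and that the shortfall from x is x * p**2, which is zero only
# when x or p is zero.
for x in (100.0, 37.5, 1e6):
    for p in (0.5, 0.1, 0.01):
        final = x * (1 + p) * (1 - p)
        assert abs(final - x * (1 - p**2)) < 1e-6
        assert abs((x - final) - x * p**2) < 1e-6
```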
The reason for the fallacy is that what we’re really thinking is that there’s a multiplier; call it y, so that after one round we end up with:

xy

And after the second round we end up with:

xy(1/y)
Obviously, if you cancel, this will get you back to the initial x. And the cancellation works the same way with logs: log(x) + log(y) − log(y) = log(x).
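A minimal sketch of that cancellation, with x = 100 and y = 1.5 as arbitrary stand-ins:

```python
import math

# Up by a multiplier y, then back down by the factor 1/y: the
# multipliers cancel and you recover x.
x, y = 100.0, 1.5
round_trip = x * y / y                 # back to 100.0

# In logs, multiplying by y is adding log(y) and dividing by y is
# subtracting log(y), so the two rounds are symmetric +/- moves.
log_trip = math.log(x) + math.log(y) - math.log(y)
```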