We are getting on slippery slopes here. I would disagree that a person with $100k has less risk aversion than someone with $2M; I know people in both categories. But it is rather irrelevant for the topic at hand.

I agree with most of this but want to point out that decreasing RRA (relative risk aversion) with higher wealth is an argument for less leverage, not more. Decreasing RRA at high wealth is equivalent to having smaller changes in utility from relative changes in wealth. For example, doubling from 5M to 10M, or halving from 10M to 5M, makes very little difference in utility. Taking risk today (for someone with no future additional savings) can help increase the odds of very high wealth, but if there is little additional utility there is little incentive. A person with decreasing RRA at high wealth must have increasing RRA at low wealth. Taking lots of leverage increases the risk of a very large loss at long horizons. If the person has increasing RRA at lower wealth, they would view these outcomes extremely negatively.

I would agree with all of this. But I would argue that both the time-in-the-market and the reduced-risk-aversion conditions are likely fulfilled for most people reading this thread and seriously working toward their financial independence, so the formula given is obsolete in most cases. I don't mean to make statements that include or exclude more or less affluent readers; I think what matters is the savings rate relative to current consumption (which, by the way, is often worse for very affluent people, who occasionally also go broke). That is simply because the savings grow exponentially, while the earnings from your career typically just grow with inflation after the first few years. Someone with less than 15-20 years of time in the market can use the formula; probably only for less than 15, because the yellow or blue curve with the earlier upswings would apply. 15-20 years is probably also the minimum needed to seriously benefit from mHFEA. Probably not coincidentally, 15 years is about the maximum that momentum or "trending" market dislocations of valuation ratios last, or at least the point before the exponential function "wins" and dominates the returns.

I also think constant relative risk aversion is not exactly right. I think my risk aversion likely peaks around 1M lifetime wealth but would be lower at both higher and lower lifetime wealth. I would not view a 50% reduction in wealth from 10M to 5M as being as bad as a 50% reduction from 1M to 0.5M. Likewise, I wouldn't view a 50% reduction from 100k to 50k as terrible, because both outcomes are so bad it just wouldn't affect my life - I'm already pretty screwed if my lifetime investments end up at 100k.
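To make the doubling-vs-halving and 50%-reduction comparisons above concrete, here is a quick sketch (my own illustration using a textbook CRRA utility function and made-up wealth levels, not anyone's actual preferences):

# Quick sketch of how constant-RRA (CRRA) utility treats proportional changes
# in wealth; the utility function and wealth levels are illustrative only.
# U(W) = W**(1-g) / (1-g), or ln(W) when g == 1.
import math

def u(W, g):
    return math.log(W) if g == 1.0 else W ** (1.0 - g) / (1.0 - g)

# With constant RRA of 1 (log utility), a 50% loss costs the same utility
# at every wealth level:
print(u(5e6, 1.0) - u(10e6, 1.0))    # 10M -> 5M:   -ln 2 ~= -0.693
print(u(0.5e6, 1.0) - u(1e6, 1.0))   # 1M -> 0.5M:  -ln 2 ~= -0.693

# With a higher (still constant) RRA, halving hurts more than doubling helps:
print(u(10e6, 2.0) - u(5e6, 2.0))    # gain from doubling 5M -> 10M:  +1e-7
print(u(2.5e6, 2.0) - u(5e6, 2.0))   # loss from halving 5M -> 2.5M:  -2e-7

The first pair is the constant-RRA assumption the quoted text pushes back on; the second pair is why decreasing RRA at high wealth argues for less leverage rather than more.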
So there are definitely some issues with the assumptions and with the effect of time horizon. But I think Relative Risk Aversion is a powerful framework for determining equity allocation. Over periods of 20+ years we can probably cheat equity allocation a bit higher than the framework would tell us, because two of the assumptions (independence of annual returns, and constant relative risk aversion even at very low or very high future wealth) aren't perfect for long horizons and potentially large changes in wealth.
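For reference, the formula this framework produces (the Samuelson/Merton share discussed below) can be stated in one line. This is a standard textbook restatement under the i.i.d.-returns and constant-RRA assumptions mentioned above; the parameter values are made up for illustration:

# Samuelson/Merton optimal risky share under constant RRA (gamma) and i.i.d. returns.
# Note that the horizon does not appear, which is exactly the point of contention.
def samuelson_share(mu, r_f, sigma, gamma):
    # mu: expected risky return, r_f: risk-free rate, sigma: volatility, gamma: RRA
    return (mu - r_f) / (gamma * sigma ** 2)

print(samuelson_share(mu=0.07, r_f=0.03, sigma=0.16, gamma=2.0))  # ~0.78, i.e. ~78% risky assets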
I think a potential argument against using RRA to calculate the Samuelson share is that RRA actually decreases at low wealth for most people. To take it to an extreme, losing half your money when you have $2 isn't as bad as losing half when you have $2M. Risk aversion is both absolute and relative.
Because Samuelson assumes risk aversion is relative and constant, the worst-case scenarios cost more utility than they would for most people. Even if the assumption of normality isn't perfect, the odds of losing 99% of your money are surely higher over 20 years than over 1 year. Assuming a constant RRA of 1 means that losing 99% instead of 98% has the same effect on utility as doubling your money (in opposite directions). I don't think that's true.
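Spelling out the log-utility arithmetic behind that last claim (a quick check, with RRA = 1 so U(W) = ln W):

import math

W = 1.0  # starting wealth; only ratios matter under log utility
drop_98_to_99 = math.log(0.01 * W) - math.log(0.02 * W)  # 98% loss -> 99% loss: -ln 2
doubling      = math.log(2.0 * W) - math.log(W)          # doubling:             +ln 2
print(drop_98_to_99, doubling)  # -0.693..., +0.693...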
You are repeating your thesis that "the odds of losing 99% of your money are surely higher over 20 years than over 1 year," which contradicts your charts (for moderate leverage). History offers no precedent for losing 99% in either one year or in 20 or 30 years; in fact not only the probability of a 99% loss, but even the probability of a 1% loss, was zero for horizons of ca. 20 years or longer with no or moderate leverage, if I read your chart right. So you are wildly speculating here on something with no factual support whatsoever.

The question becomes what the world would look like if that were to happen. In many such catastrophic scenarios I think your portfolio would outlive you, in which case the utility of $1, $100k, and $1M would all be zero; you could then argue that the $2M -> $4M increment, which you would likely realize if a global catastrophic event does not happen, has more utility, even if the additional utility is small. (Obviously, in a catastrophic scenario, not saving at all and consuming everything might have been best; but that is off topic.) Another scenario I could imagine surviving is political upheaval with a full or partial expropriation of property. If it's partial and based on original assets, you might be better off with $4M than with $2M at that time. I'm not going further here because we are not allowed to discuss politics, and this is pure speculation anyway.

To stay on topic, the question of whether a loss larger than historical precedent can happen over your investment horizon while the system survives is speculation. There are a lot of empirical and theoretical arguments that the capital will keep growing exponentially in the long run, as long as the system survives.
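One rough way to examine the horizon-vs-loss-probability question, independent of any particular chart, is a quick Monte Carlo under i.i.d. normal annual returns. The return, volatility, and borrowing-cost numbers below are assumptions chosen for illustration, not values taken from the charts in this thread:

# Rough Monte Carlo sketch: probability that a constantly leveraged portfolio ends
# an N-year horizon below a given fraction of its starting value, assuming i.i.d.
# normal annual returns.  All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def prob_ending_below(leverage, years, floor_fraction,
                      mu=0.07, sigma=0.16, borrow_rate=0.03, n_paths=100_000):
    r = rng.normal(mu, sigma, size=(n_paths, years))      # unlevered annual returns
    port = leverage * r - (leverage - 1.0) * borrow_rate   # levered annual returns
    ending = np.prod(1.0 + port, axis=1)                   # terminal wealth multiple
    return float(np.mean(ending < floor_fraction))

for years in (1, 20, 30):
    p = prob_ending_below(leverage=1.5, years=years, floor_fraction=0.5)
    print(f"{years:2d} years, 1.5x leverage: P(ending below 50% of start) ~= {p:.4f}")

Whether the i.i.d. assumption is appropriate over 20-30 year horizons, versus the mean reversion implicit in the historical charts, is exactly the disagreement here.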
Can you still answer my question about the ratios in your charts please?
Statistics: Posted by comeinvest — Thu Sep 19, 2024 12:58 am — Replies 3246 — Views 746105