A link to the full report is found below.
Synopsis
The essay questions the professional diligence applied when volatility is used to portray levels of risk without the additional context needed to draw useful conclusions from the data. With the help of hypothetical examples and real-life data, the article illustrates several weaknesses in the paradigm that equates volatility with risk. While conceding that the laws of the normal distribution, where legitimately and properly applied, could be one component of risk assessment, the author warns of the hidden dangers of this metric.
Most notably, the article points to the logical inconsistency behind using volatility as a measure of risk. Volatility can only be a legitimate metric if financial markets are random in nature, since randomness is a prerequisite for the normal distribution, the statistical concept on which volatility rests. But that same assumption of randomness implies that the results produced by the financial services industry are equally random, and random results can be obtained far more cheaply by tossing a coin, effectively rendering the industry obsolete.
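For readers unfamiliar with the metric (this illustration is not part of the essay): the volatility figure under discussion is conventionally the annualized standard deviation of periodic returns, a statistic whose interpretation as "risk" leans on the assumption of normally distributed, random returns. A minimal Python sketch, assuming a series of daily closing prices and 252 trading days per year:

    # Illustration only: conventional historical volatility, i.e. the annualized
    # standard deviation of daily log returns. Reading this number as "risk"
    # presumes the returns behave like independent draws from a normal distribution.
    import math

    def annualized_volatility(prices, periods_per_year=252):
        """Annualized volatility from a series of closing prices."""
        # Daily log returns between consecutive closes
        returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
        mean = sum(returns) / len(returns)
        # Sample standard deviation of the returns
        variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
        return math.sqrt(variance) * math.sqrt(periods_per_year)

    # Example with a short, made-up price series
    print(annualized_volatility([100.0, 101.2, 99.8, 100.5, 102.0]))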
The author concludes that other metrics are better suited to measuring risk and suggests that the continued popularity of volatility may be rooted in a conflict of interest on the part of the financial services industry: an industry whose results are random does not benefit from metrics that would allow a sound qualitative assessment of its performance.
Put differently, volatility is really used to obscure a lack of quality.
For the full report, click: