Structural Distortion Of Volatility

The standard deviation of incremental rates of change in the price of financial assets, also known as volatility, is often heavily distorted. This article illustrates where that distortion originates, using real-world data: weekly rates of change in the Standard & Poor’s Composite Index, from 8th December 2017 to 1st September 2023. Raw data are shown in Graph-01 further down.

Before you go there, depending on your familiarity with the normal distribution, its schematic representation may serve as a reference. Distribution plots display the raw data, just not in chronological order. Instead, the data are arranged in ascending order of magnitude and grouped into distribution bins. Magnitude is shown horizontally, frequency vertically. The height of any column reflects the share of data found in that section of the value scale. All distribution bins have equal width, but since data cluster around the mean, bins near the mean are far more densely populated than those distant from it. The schematic representation labels the horizontal axis by distance from the mean, expressed in multiples of standard deviation, not by value. Volatility is nothing other than the width of these columns.

While volatility indicates width, only the mean provides the information needed to locate the plot, or to pinpoint any share of the data by value. Say the mean was +3.0% and volatility was +/- 2.0%. Then bin 2.0 would range from +5.0% to +7.0% on the value scale. Why? Because the inner boundary of bin 2.0 is the outer boundary of bin 1.0: mean plus width, or +3.0% + 2.0%, gets us to +5.0%. The outer boundary of bin 2.0 lies one further unit of width out: mean plus width plus width, or +3.0% + 2.0% + 2.0%, arriving at +7.0%. If the mean remained at +3.0% but volatility increased to +/- 4.0%, the same bin would range from +7.0% to +11.0% on the scale. But still being bin 2.0, it would contain, at least in theory, the same share of data, as would bin -2.0.

It is the beauty of normally distributed data that, with merely two pieces of information, any one observation in the sample, or any group of observations, can be located. The instantly accessible information in volatility as a metric (one standard deviation) is the distance spanned by the 68.26% of observations situated at the centre. Volatility gives half that distance, a margin on either side of the mean. The bulk of the data (the aforementioned 68.26%) is found between two value points: mean plus one standard deviation (upper inflection) and mean minus one standard deviation (lower inflection). None of this information could be read off a simple chronological display.
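The worked arithmetic above is mechanical enough to express in a few lines of code. This is a minimal sketch; the helper `bin_bounds` is purely illustrative and not taken from the article:

```python
def bin_bounds(mean: float, sigma: float, k: int) -> tuple[float, float]:
    """(inner, outer) boundary of bin k (k = ..., -2, -1, 1, 2, ...).

    Bin k's inner boundary is shared with the next bin inward;
    its outer boundary lies one unit of width (sigma) further out.
    """
    step = 1 if k > 0 else -1
    inner = mean + (k - step) * sigma
    outer = mean + k * sigma
    return inner, outer

print(bin_bounds(3.0, 2.0, 2))   # (5.0, 7.0)   mean +3.0%, volatility +/- 2.0%
print(bin_bounds(3.0, 4.0, 2))   # (7.0, 11.0)  volatility widened to +/- 4.0%
print(bin_bounds(3.0, 2.0, -2))  # (1.0, -1.0)  the mirror bin on the lower side
```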

In Graph-01, the area shaded in blue identifies the period of the most extreme market gyrations observed globally at the beginning of the Covid-19 pandemic. Financial markets were initially stunned by the measures taken to contain the virus, then intoxicated by the equally unprecedented fiscal and monetary stimulus introduced to mitigate the economic impact and avert a potential collapse of financial markets. Being so extraordinary, this period is ideal for demonstrating how a mere handful of data points entering, or leaving, the calculations impacts results. Below, fixed-length trails will show exactly that: with every incremental advance, a new observation enters and the oldest one drops out.

Graph-02 shows trailing values (52 weeks) for the mean incremental rate of change, and for one standard deviation (volatility). Again, the aforementioned period of extreme gyrations is highlighted in blue. Note how volatility rises sharply before contracting again 52 weeks later; the contraction is caused by the extreme readings dropping out of the calculation exactly one window length after they entered. The more moderate additional increase in volatility during this time stems from a general increase in the dynamic of price changes. Two somewhat different causes mingle: the shock being one, the ensuing change in price patterns being another. In a larger sample, say 100 weeks, the shock-related data would remain part of the calculation for much longer after the actual event; in a smaller sample they would drop out correspondingly sooner. Few singular events are as extreme as the lockdown, and not all such events impact market sentiment for as long, or cause a comparable structural change for any length of time. In October 1987, there was no such subsequent increase in dynamic, just the statistical residue of a shock that lasted less than a single week.
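As a sketch of that mechanism, computed here on synthetic stand-in data rather than the article's S&P series, a 52-week trail is simply a rolling window:

```python
import numpy as np
import pandas as pd

# Stand-in weekly rates of change; the article uses weekly changes in the
# Standard & Poor's Composite Index, Dec 2017 - Sep 2023.
rng = np.random.default_rng(0)
weekly = pd.Series(rng.normal(0.1, 2.0, 300))

WINDOW = 52  # one year of weekly observations

trailing_mean = weekly.rolling(WINDOW).mean()
trailing_vol  = weekly.rolling(WINDOW).std()   # one standard deviation

# Each step forward admits one new observation and drops the oldest:
# a shock inflates trailing_vol the week it enters the window and
# deflates it again exactly 52 weeks later, when it drops out.
```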

The first part of Illustration-01 shows pairs of related boundaries: lines depict extremes (high and low), while shaded areas refer to inflection points (upper and lower). The second part of Illustration-01 plots the distances between the boundaries seen in the upper part. Note the different sensitivity of these boundaries, and of the ranges they define, to data entering and leaving the calculation. The more peripheral a boundary, the more static it is. Extreme readings are, by virtue of their magnitude, infrequent, and not easily exceeded or even challenged. Outliers tend to dominate the sample; but take the most extreme value out and another will take its place. Please also note that changes to corresponding boundaries are neither simultaneous nor symmetric. And finally, make a mental note that other such paired boundaries (e.g. third and first quartiles) would show similar characteristics. Within samples of market data there is no genuine order, not even if the data broadly cluster around one central value. Often, there are several clusters inside a sample.
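A sketch of how such paired boundaries, and the ranges they define, could be computed over the same kind of 52-week trail (again on invented stand-in data; none of the variable names come from the article):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weekly = pd.Series(rng.normal(0.1, 2.0, 300))   # stand-in weekly returns
roll = weekly.rolling(52)

high, low = roll.max(), roll.min()              # extremes: the lines in the plot
mean, vol = roll.mean(), roll.std()
upper_inf, lower_inf = mean + vol, mean - vol   # inflection points: the shaded areas

extreme_range    = high - low   # static: jumps only when a record enters or drops out
inflection_range = 2 * vol      # reacts, at least slightly, to every single observation
```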

This observation is critical to my argument that volatility is structurally distorted, as is any parameter that owes its existence to an assumption of normally distributed incremental changes. Such distortions do not manifest only here. They also show between separate measures of central tendency: mean and median. The two would be identical in a normally distributed sample, yet incremental rates of change of traded financial assets are notorious for significant gaps between mean and median. That is because the two are not equally sensitive to outliers. The mean, the arithmetic middle, is extremely sensitive to them. The median, the positional middle, is almost immune: it is sensitive to the number of outliers, but not to their magnitude. While a calculation of standard deviation is more complex than that of the mean, it is still based on the mean, as well as on every observation; standard deviation is derived from each observation’s difference from the mean. Of course it is sensitive to outliers. Unlike the interquartile range, standard deviation is meant to reflect all data, and under certain conditions that is its weakness. In financial markets, those conditions are more than merely possible; they are virtually omnipresent.
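The asymmetry in sensitivity is easy to verify numerically; the sample below is invented purely for illustration:

```python
import numpy as np

sample = np.array([-1.2, -0.4, 0.1, 0.3, 0.5, 0.8, 1.1])
print(np.mean(sample), np.median(sample))     # 0.171...  0.3

# Make the one negative outlier ten times larger: the mean moves
# substantially, the median not at all, because the median registers
# an outlier's presence, not its magnitude.
shocked = sample.copy()
shocked[0] = -12.0
print(np.mean(shocked), np.median(shocked))   # -1.371...  0.3
```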

We have seen in the schematic representation of a normal distribution what portion of the data is supposed to be found where. That allows us to compare inflection points with their corresponding percentile values. The lower inflection divides the sample into 15.9% and 84.1%; it corresponds almost exactly to the 16th percentile. The upper inflection divides the sample into 84.1% and 15.9% and corresponds almost exactly to the 84th percentile. In contrast to standard deviation, percentiles are not impacted by outliers at both ends, only by those at one end, and even then only by their number, not their magnitude. That makes them far less sensitive to distortion. The two sets of values should therefore be fairly close to one another; if they are not, that indicates a degree of distortion. Illustration-02 shows just that.
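A minimal sketch of such an audit, assuming a plain NumPy implementation (the function name `audit` is my own, not the article's):

```python
import numpy as np

def audit(returns: np.ndarray) -> dict:
    """Compare mean +/- one standard deviation with the nearly
    equivalent percentiles (15.9th and 84.1st). In a perfectly
    normal sample each pair would almost coincide; a persistent
    gap indicates distortion."""
    m, s = returns.mean(), returns.std(ddof=1)
    p16, p84 = np.percentile(returns, [15.9, 84.1])
    return {
        "lower": (m - s, p16),   # (inflection, percentile), lower hemisphere
        "upper": (m + s, p84),   # (inflection, percentile), upper hemisphere
    }
```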

In Illustration-02, each hemisphere has been given its own chart in order to allow greater detail. Lines refer to the value of the inflection point, areas to the value of the corresponding percentile. The comparison is quite revealing:

Firstly, the gap between inflection and percentile is anything but constant; if the difference were the result of rounding, it would be. Secondly, the difference between the two sets of calculations in each hemisphere is at times rather substantial, at times moderate, at times positive, and at other times negative. Again, this cannot be explained by rounding. Thirdly, and somewhat less obviously, the differences in each hemisphere neither coincide with one another, nor are they symmetrical. Once more, this cannot possibly result from rounding.

Illustration-03 plots the net differential seen before, but no longer differentiates by hemisphere. The net total is labelled excess volatility (upper part). In the lower part, standard deviation has been cleansed of that excess, yielding an audited version of volatility (red line), plotted together with the orthodox version of standard deviation (thin black line).
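The article does not spell out the arithmetic behind Illustration-03, so the following is only one plausible reading of the construction, not the author's published formula: treat the half-width of the percentile band as the audited volatility, and whatever the orthodox standard deviation adds on top of that as the excess.

```python
import numpy as np

def excess_and_audited(returns: np.ndarray) -> tuple[float, float]:
    """Assumed construction (an illustration, not the article's formula):
    audited volatility is half the width of the 15.9th-84.1st percentile
    band; excess volatility is the orthodox standard deviation's overshoot
    beyond that half-width."""
    orthodox = returns.std(ddof=1)
    p16, p84 = np.percentile(returns, [15.9, 84.1])
    audited = (p84 - p16) / 2.0
    return orthodox - audited, audited
```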

Standard deviation is supposed to express how reliable the mean is as a predictor of any increment found outside the sample. But incremental rates of change in financial markets are not like other data to which normal distribution is applied. When packaging boxes of oranges to obtain uniform unit weight within tolerable margins, the difference between the largest and smallest oranges in any plantation will not double, or halve, from one harvest to the next; one box of 50 oranges will weigh the same as the next. Or take air transport, where fuel requirements are calculated from a known number of passengers with unknown body mass: the mean body mass of airline passengers will not jump by 20kg between the outbound and inbound leg, or from British Airways to American Airlines. That said, it has shifted by that much. The shift took two generations and went unnoticed by an industry otherwise obsessed with risk. A commuter-sized aircraft crashed within minutes of take-off as a direct consequence of such oversight, killing everyone on board. So much for the predictive power of the mean value.

We should not work on the same general assumptions as Harry Markowitz did in 1952. In contrast to incremental rates of change of traded financial assets, data on oranges, or on people’s body mass, sperm count, and IQ, see no instant shift in whatever force makes population data what they are, not even if they slowly expand or deteriorate. With such data, any given sample of sufficient size has indeed an equal chance of being representative of the lot. Thus there is sufficiently reliable utility in making deductions based on the distribution of their characteristics. Much less so with incremental rates of change of asset prices.

Financial market data change too quickly, by too much, and for too many conceivable reasons for normal distribution to work as it does with most other types of data. The nature of the data, and of what causes it to be, does not fit the method. Using it anyway is akin to fuelling an aircraft with gasoline instead of kerosene: it will not fly, but you can make it explode. Arguably, volatility is always distorted, as is the mean, just not always to the same degree. It is all very well to audit calculations of standard deviation with neighbouring percentiles. But these are not some kind of gold standard; they are merely the tool used here to uncover what would otherwise remain obscure. I have not touched on other sources of distortion, just on one weakness of volatility among several.

The image carousel below displays the same information as Illustration-03, but does so for each of the ten national general equity indices that form the M10 universe of markets.