
Cumulated Sum (“CUSUM”) – Using statistical process control to monitor active managers

Most traditional performance measurement algorithms evaluate a portfolio over a fixed horizon. These traditional measures therefore suffer serious limitations, particularly when the most recent three- to five-year period is analysed. Good performance in some years can mask poor performance in others, making it difficult to estimate the portfolio’s current performance, and harder still to identify transitions from good performance to bad.

As market inefficiencies appear and disappear, all investment processes will sometimes flourish and at other times stagnate or stumble, necessitating a fundamentally different approach to performance measurement. Investors ought to continually estimate the current performance of their portfolios and to rigorously re-evaluate each manager’s investment strategy as soon as they determine that it no longer adds value.

Dynamic performance measurement and change point detection are closely related and have long been addressed in the fields of sequential analysis and statistical process control (SPC). SPC was developed at Bell Labs in the 1930s by Walter Shewhart and was originally used to monitor Western Electric’s telephone production lines.

In a seminal book, Wald (1947) shows how sequential analysis can greatly speed up the detection of change in a wide variety of systems. His insights have led to the development of many algorithms for change point detection.

One such procedure is the CUSUM, or cumulated sum.

In fact, it is often claimed that it takes forty years to determine whether an active portfolio outperforms its benchmark. The claim is fallacious. In this article, we show how a statistical process control scheme known as the CUSUM, which is closely related to Wald’s [1947] Sequential Probability Ratio Test, can be used to reliably detect flat-to-benchmark performance in forty months, and underperformance faster still. By rapidly detecting underperformance, the CUSUM allows investors to focus their attention on potential problems before they have a serious impact on the performance of the overall portfolio.

A useful measure of performance must incorporate both risk and return. While there are many such measures, the information ratio is among the most useful. It is defined as:

Information ratio = annualised excess return / annualised tracking error

where both the excess return and the tracking error are measured relative to an appropriate benchmark.

The information ratio captures both the excess return generated relative to the benchmark and the consistency with which that excess return is generated; the consistency is measured by the tracking error. The ratio is primarily used as a performance measure for fund managers, and it is frequently used to compare the skills of fund managers who follow similar investment strategies.

Hence, the information ratio is directly related to a manager’s skill.

Good and poor managers are defined by their information ratio:

  • Good manager: Information ratio >= 0.5
  • Average manager: Information ratio = 0
  • Very poor manager: Information ratio <= -0.5
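
To make the calculation concrete, the short sketch below computes an annualised information ratio from monthly portfolio and benchmark returns. It is a minimal illustration assuming monthly data and the usual square-root-of-twelve annualisation; the function name and the simulated figures are our own.

    import numpy as np

    def information_ratio(portfolio_returns, benchmark_returns, periods_per_year=12):
        """Annualised information ratio from periodic (e.g. monthly) returns."""
        active = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
        excess_return = active.mean() * periods_per_year                  # annualised excess return
        tracking_error = active.std(ddof=1) * np.sqrt(periods_per_year)   # annualised tracking error
        return excess_return / tracking_error

    # A manager who beats the benchmark by roughly 0.15% per month with 1% monthly
    # tracking error has an information ratio near 0.5, i.e. a good manager as defined above.
    rng = np.random.default_rng(0)
    active_returns = rng.normal(0.0015, 0.01, 120)
    print(round(information_ratio(active_returns, np.zeros(120)), 2))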

The CUSUM procedure was first proposed by E. S. Page in 1954 as a method to rapidly detect changes in the mean of a noisy random process. While the mathematics of Page’s solution are complex, his insight can be appreciated every time a chart of an active manager’s monthly excess returns is compared with a chart of the cumulative excess returns.
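
To make Page’s idea concrete, here is a minimal sketch of a one-sided CUSUM that watches for a downward shift in the mean of a noisy series of standardised excess returns. The reference value k, the threshold h and the simulated shift are illustrative assumptions, not the calibrated parameters of the scheme described later in this article.

    import numpy as np

    def page_cusum(excess_returns, k=0.25, h=5.0):
        """One-sided CUSUM in the spirit of Page (1954) for a downward mean shift.

        k : reference value (allowance), roughly half the shift to be detected
        h : decision threshold; an alarm is raised when the statistic falls below -h
        Returns the month of the first alarm, or None if no alarm is raised.
        """
        s = 0.0
        for month, x in enumerate(excess_returns, start=1):
            s = min(0.0, s + x + k)   # drifts downward only while returns run below -k on average
            if s < -h:
                return month
        return None

    # A process whose mean drops from 0 to -0.5 (in tracking-error units) after month 36:
    rng = np.random.default_rng(1)
    series = np.concatenate([rng.normal(0.0, 1.0, 36), rng.normal(-0.5, 1.0, 60)])
    print(page_cusum(series))   # typically raises an alarm some months after the shift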

Like any performance measurement technique, the CUSUM has both strengths and limitations, and investment professionals must be aware of both to maximise its utility.

CUSUM in practice:

  • CUSUM is an extraordinarily powerful tool, but its robustness can lead to abuse.
  • At Absa Multi Management, we do not run it on autopilot as a hire-and-fire tool; it is a monitoring and investigative tool.
  • We run additional tests when an alarm is raised and determine why the manager has underperformed.
  • We ensure that the benchmark is appropriate for the manager’s mandate.
  • The thresholds are chosen to work well in practice.
  • There are very few false alarms in practice.

Summary:

  • CUSUM detects underperformance rapidly, more than ten times faster than standard techniques.
  • It is a very powerful and reliable technique. It is extremely robust and works across various asset classes.
  • It focuses attention on managers who require it.

The CUSUM procedure explicitly recognises the trade-off between detection speed and the ability to discriminate between good and bad managers. It requires us to specify a threshold for the log-likelihood ratio below which no decision is made about the state of the investment process. When this threshold is crossed, sufficient evidence has accrued to conclude that the manager’s ability to add value has declined.

The threshold determines both the average time taken to detect underperformance by a bad manager and the rate of false alarms (incorrectly identifying a good manager as bad). If the threshold is set low, underperformance is rapidly detected, but we experience a correspondingly high rate of false alarms. If, on the other hand, we set the threshold high, it takes longer to detect underperformance, but the rate of false alarms drops.
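
This trade-off can be seen directly by simulation. The sketch below estimates, for the simple one-sided CUSUM sketched earlier, the average number of months until an alarm for a very poor manager (where fast detection is desirable) and for a good manager (where any alarm is a false alarm), at several thresholds. The reference value, thresholds and horizon are illustrative assumptions and do not reproduce the calibrated run lengths quoted below.

    import numpy as np

    def average_run_length(monthly_mean, h, k=0.07, n_sims=500, horizon=360, seed=0):
        """Monte Carlo estimate of the average months until the simple lower CUSUM
        raises an alarm, for standardised excess returns with the given monthly mean
        (monthly mean ~ annual information ratio / sqrt(12)). Runs that never alarm
        are counted at the full horizon, so long run lengths are lower bounds."""
        rng = np.random.default_rng(seed)
        months_to_alarm = []
        for _ in range(n_sims):
            s, alarm_at = 0.0, horizon
            for month, x in enumerate(rng.normal(monthly_mean, 1.0, horizon), start=1):
                s = min(0.0, s + x + k)
                if s < -h:
                    alarm_at = month
                    break
            months_to_alarm.append(alarm_at)
        return float(np.mean(months_to_alarm))

    # Lower thresholds detect a very poor manager (IR = -0.5) sooner, but also shorten
    # the average time to a false alarm on a good manager (IR = +0.5).
    for h in (3.0, 6.0, 9.0):
        print(h,
              round(average_run_length(-0.5 / 12 ** 0.5, h), 1),   # detection speed
              round(average_run_length(+0.5 / 12 ** 0.5, h), 1))   # false-alarm run length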

The upper and lower CUSUM thresholds and expected run lengths:

[Table: expected run lengths at the Upper CUSUM Threshold of 16.7 and the Lower CUSUM Threshold of -23.6 for good (information ratio 0.5), average (information ratio 0) and very poor (information ratio -0.5) managers.]

Source: The above Upper & Lower CUSUM thresholds are computed using the approximate computational approach described in Woodall [1983], Vance [1986] and Yashchin, Philips, and Stein [1997] and verified using a simulator.

 

At the Upper CUSUM Threshold of 16.7, a good manager’s log-likelihood ratio (information ratio of 0.5) will cross this level once in 25 months, an average manager’s (information ratio of 0) once in 26 to 41 months, and a very poor manager’s (information ratio of -0.5) once in 42 to 84 months.

When the log-likelihood ratio crosses the Lower CUSUM Threshold of -23.6, an alarm is raised. At this point, sufficient evidence has accrued to warrant an investigation of the manager’s process. It takes only 41 months to detect flat-to-benchmark performance. However, once in 84 months, or seven years, a good manager’s log-likelihood ratio will cross this level by chance and a false alarm will be raised. Following the alarm, an investigation is launched. If, after a thorough investigation, it is concluded that the manager’s investment process is satisfactory and that the alarm is false, the CUSUM is reset and the monitoring process is restarted.
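
A minimal sketch of how such a two-sided scheme might be run on standardised monthly excess returns is shown below. It accumulates the Gaussian log-likelihood ratio of the “good manager” hypothesis (information ratio of 0.5) against the “very poor manager” hypothesis (information ratio of -0.5), resetting after each crossing. The thresholds here are illustrative placeholders: the 16.7 and -23.6 quoted above belong to the published parameterisation and run-length calibration, which this sketch does not attempt to reproduce.

    import numpy as np

    def monitor(standardised_excess_returns, upper, lower):
        """Two-sided monitoring sketch on standardised monthly excess returns.

        Tracks the cumulative Gaussian log-likelihood ratio of 'good manager'
        (IR = +0.5) against 'very poor manager' (IR = -0.5), assuming unit monthly
        tracking error. Crossing the upper threshold is read as evidence of skill;
        crossing the lower threshold raises an alarm. Either way the statistic is
        reset and monitoring restarts."""
        mu_good = 0.5 / np.sqrt(12)    # monthly mean under the 'good' hypothesis
        mu_poor = -0.5 / np.sqrt(12)   # monthly mean under the 'very poor' hypothesis
        llr, events = 0.0, []
        for month, x in enumerate(standardised_excess_returns, start=1):
            llr += (mu_good - mu_poor) * (x - 0.5 * (mu_good + mu_poor))
            if llr >= upper:
                events.append((month, "upper crossing: evidence of added value"))
                llr = 0.0
            elif llr <= lower:
                events.append((month, "alarm: investigate the manager's process"))
                llr = 0.0
        return events

    # Twenty years of a very poor manager (annual IR of about -0.5); with these
    # illustrative thresholds an alarm is typically raised well inside the sample.
    rng = np.random.default_rng(2)
    poor = rng.normal(-0.5 / np.sqrt(12), 1.0, 240)
    print(monitor(poor, upper=4.0, lower=-4.0))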

An example of a good manager on the upper CUSUM scheme and a good manager on the lower CUSUM scheme:

The above manager hits the Upper CUSUM Threshold, on a weighted average, in less than 25 months and is therefore a good manager on the upper CUSUM scheme. The manager has not hit the Lower CUSUM Threshold in more than 42 months and is therefore a good manager on the lower CUSUM scheme.

An example of a very poor manager on the upper CUSUM scheme and a very poor manager on the lower CUSUM scheme:

The above manager hits the Upper CUSUM Threshold, on a weighted average, only after more than 42 months and is therefore a very poor manager on the upper CUSUM scheme. The manager hits the Lower CUSUM Threshold, on a weighted average, in less than 25 months and is therefore a very poor manager on the lower CUSUM scheme.

This raises a very important question – what ought one to do when an alarm is raised? The CUSUM procedure detects underperformance, but offers no causal explanation for it. It should not, therefore, be used as a tool to engage or terminate asset managers based solely upon their recent performance. It is incumbent upon us at Absa Multi Management, and indeed imperative, to launch a thorough investigation into the manager’s investment process and to decide whether it is likely to deliver satisfactory performance going forward.

At Absa Multi Management, we pride ourselves on having different and innovative tools in our toolbox.