Essays on Robust Portfolio Selection and Pension Finance
Thesis submitted in partial fulfilment of the requirements for the degree of Doctor of ...
2.2 Markowitz Portfolio Theory

Portfolio selection is the problem of allocating capital over a set of available assets by maximizing the portfolio return and minimizing the corresponding risk. Although the benefits of diversifying a portfolio, namely the reduction of portfolio risk, have been recognized since the beginning of financial markets, Markowitz (1952, 1959) was the first to formulate a mathematical framework for optimal portfolio selection, the so-called mean-variance portfolio framework. Markowitz (1952) uses the expected value and the variance of the random portfolio return to measure the total return and the associated risk respectively, showing that the problem can be represented as a convex quadratic program (yielding the well-known efficient frontier) by assuming either an upper cap on the variance or a lower bound on the portfolio return. The Markowitz portfolio allocation model had a profound effect on asset pricing and financial economic modeling. For instance, the Capital Asset Pricing Model of Sharpe (1964), Lintner (1965) and Mossin (1966) was a direct and logical outgrowth of Markowitz portfolio theory. Sharpe and Markowitz won the Nobel Prize in Economic Sciences in 1990 for their scientific contributions to asset allocation and asset pricing.
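The convex quadratic program described above can be sketched as follows. This is a minimal illustration, not part of the original framework's presentation: all asset numbers are hypothetical, a long-only constraint is added for simplicity, and scipy's general-purpose SLSQP solver stands in for a dedicated quadratic programming solver.

```python
import numpy as np
from scipy.optimize import minimize

def mean_variance_weights(mu, sigma, target_return):
    """Minimize portfolio variance w' Sigma w subject to a lower bound
    on the expected return and full investment (weights sum to one)."""
    n = len(mu)
    cons = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},             # fully invested
        {"type": "ineq", "fun": lambda w: w @ mu - target_return},  # return floor
    ]
    res = minimize(lambda w: w @ sigma @ w, np.full(n, 1.0 / n),
                   method="SLSQP", constraints=cons,
                   bounds=[(0.0, 1.0)] * n)                         # long-only
    return res.x

# Hypothetical three-asset universe (annualized means and covariances)
mu = np.array([0.08, 0.12, 0.10])
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
w = mean_variance_weights(mu, sigma, target_return=0.10)
```

Sweeping `target_return` over a grid and recording the resulting minimum variances traces out the efficient frontier.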
2.3 Modern Portfolio Theory Criticisms

In practice, the application of Markowitz (1952) portfolio theory requires estimates of the mean and covariance matrix of the asset returns, and it has been widely reported that the theory is very sensitive to estimation errors in these inputs; see for instance Green and Hollifield (1992), Goldfarb and Iyengar (2003), Ceria and Stubbs (2006), DeMiguel et al. (2009a) and Glasserman and Xu (2013), amongst others. Merton (1980) also points out that accurately estimating the means is a much harder task than precisely estimating the covariance matrix, while Kallberg and Ziemba (1984) report that statistical errors in the means are approximately ten times as significant as errors in the covariances. In other words, if the sample means and covariances are inaccurate, the optimal asset weights computed via the Markowitz mean-variance portfolio optimization model contain significant errors, since the optimal solutions are highly sensitive to perturbations in the input data of the problem. As a direct consequence, the Markowitz model leads to what Michaud (1999) calls 'investment-irrelevant' portfolios, which are unstable, poorly diversified and characterized by weak out-of-sample performance.
This situation has been well documented and investigated in the financial portfolio literature. For example, Michaud (1999) points out that, although Markowitz portfolio theory is a useful theoretical tool for portfolio optimization that can be applied easily in practice, many practitioners and academics have kept this theoretical framework at a distance. This is because it works like an 'error-prone procedure' and leads to unreliable portfolios with large errors, owing to its extreme sensitivity to perturbations in the input data of the mathematical optimization process. Goldfarb and Iyengar (2003), Ceria and Stubbs (2006), Glasserman and Xu (2013), Xing et al. (2014) and others also highlight that this phenomenon is a direct consequence of the fact that the solutions computed by the mean-variance portfolio method are very susceptible to disturbances in the parameters of the optimization process, since these estimates are subject to large statistical errors. In addition, Kallberg and Ziemba (1984) investigate the case of misspecification in the means, variances, covariances and investors' utility functions in normally distributed asset allocation problems, while Best and Grauer (1991) provide empirical evidence on the susceptibility of optimal mean-variance asset allocations to disturbances in the means. Broadie (1993) investigates the effect of statistical errors in the input estimates on the construction of the efficient frontier, Chopra (1993) examines the relation between portfolio diversification and estimation risk, and Chopra and Ziemba (1993) examine the loss incurred by using the estimated instead of the true parameters in the mean-variance portfolio process; a more comprehensive assessment of the effect of statistical errors on portfolio selection can be found in Ziemba and Mulvey (1998).
2.4 Portfolio Approaches Dealing with Estimation Risk

Since portfolio theory is very sensitive to estimation risk, which can often result in 'error-maximized' portfolios according to Michaud (1999), six main approaches have been developed to deal with estimation errors in the input parameters in an attempt to construct portfolios with better characteristics, according to DeMiguel et al. (2009b). These approaches are described below. The first involves alternative estimates of the means and covariance matrix, the second sets constraints on the portfolio weights, the third uses simulations to generate alternative input data (portfolio re-sampling), the fourth computes optimal combinations of portfolios, the fifth uses moment restriction techniques to reduce estimation risk, and the sixth involves worst-case portfolio optimization (robust optimization).
2.4.1 Bayes' Estimators

The first approach tries to improve portfolio performance in the presence of estimation risk by altering the estimation of the means, variances and covariances (e.g. via alternative estimators). For instance, Black and Litterman (1992), as well as Drobetz (2001), Bessler et al. (forthcoming) and other studies, combine neutral returns with subjective returns (views), allowing investors to provide estimates for some asset returns while staying neutral on others. The reliability of each return estimate can be quantified and incorporated in the mean-variance portfolio optimization process. Furthermore, Jobson et al. (1979) try to enhance the performance of Markowitz portfolios by using James-Stein-type estimators in the portfolio optimization problem, while Jorion (1986) and Frost and Savarino (1986) propose empirical Bayes estimators for the means, variances and covariances and try to eliminate extreme asset allocations (corner solutions) by reducing estimation risk.
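The James-Stein idea mentioned above can be sketched very simply: each noisy sample mean is pulled towards the grand mean across assets. This is only an illustrative sketch; in the estimators of Jobson et al. (1979) and Jorion (1986) the shrinkage intensity is derived from the data, whereas here it is a fixed, hypothetical number.

```python
import numpy as np

def shrink_means(sample_means, phi):
    """James-Stein-type shrinkage: pull each sample mean towards the
    grand mean with intensity phi in [0, 1] (phi = 0 keeps the sample
    means; phi = 1 replaces them all with the grand mean)."""
    grand_mean = sample_means.mean()
    return (1.0 - phi) * sample_means + phi * grand_mean

mu_hat = np.array([0.02, 0.15, -0.04])   # noisy sample means (hypothetical)
mu_js = shrink_means(mu_hat, phi=0.5)    # estimates pulled halfway together
```

The shrunk estimates have the same grand mean but a smaller cross-sectional spread, which dampens the extreme allocations that raw sample means induce.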
More recent studies such as Ledoit and Wolf (2003, 2004) propose a shrinkage approach to the covariance matrix estimator using factor and Bayesian models, and Kourtis et al. (2012) attempt to improve portfolio performance by directly shrinking the inverse covariance matrix using two non-parametric methods.
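A minimal sketch of linear covariance shrinkage in the spirit of Ledoit and Wolf (2004): the sample covariance matrix is blended with a scaled-identity target. In the original paper the shrinkage intensity is estimated from the data; here it is a fixed, hypothetical value, and the two-asset inputs are made up for illustration.

```python
import numpy as np

def shrink_covariance(sample_cov, delta):
    """Linear shrinkage of the sample covariance matrix towards a
    scaled-identity target, with intensity delta in [0, 1]."""
    n = sample_cov.shape[0]
    target = np.trace(sample_cov) / n * np.eye(n)   # average-variance identity
    return (1.0 - delta) * sample_cov + delta * target

S = np.array([[0.04, 0.02],
              [0.02, 0.09]])
S_shrunk = shrink_covariance(S, delta=0.3)
```

The blend keeps the total variance (the trace) intact while pulling the off-diagonal covariances towards zero, which makes the estimator better conditioned when the number of assets is large relative to the sample size.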
2.4.2 Constraints on Portfolio Weights

The second approach attempts to reduce estimation errors by setting constraints on the portfolio weights. Frost and Savarino (1988) provide evidence that estimation risk is significantly reduced by constraining asset weights in mean-variance portfolio selection strategies. Board and Sutcliffe (1994) compare the Bayes-Stein estimation model with seven alternative estimation methods, finding that there is little to choose amongst them when short selling is prohibited. Some recent studies use more sophisticated techniques that impose constraints on portfolio norms, such as DeMiguel et al. (2009a). For instance, Brodie et al. (2009) apply l1-norm constraints (the taxicab norm, defined as the sum of the absolute values of the portfolio weights) on the asset weights within the portfolio optimization process to encourage the construction of sparse portfolios, i.e. portfolios with only a few active assets (assets with nonzero weights). In addition, Tola et al. (2008) and Xing et al. (2014) impose constraints on a combination of the l1 and l∞ (maximum) norms of the portfolio weights, the latter defined as the maximum absolute value of the portfolio weights, in an attempt to construct sparse-style portfolios, i.e. sparse portfolios as explained above, while at the same time preventing large weights from being allocated to just a few assets. Finally, it has been reported in the literature that sparse-style portfolios are often very well diversified and usually deliver better out-of-sample performance in terms of risk and risk-adjusted return than naïve forms of portfolio optimization; see Xing et al. (2014) and others.
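An l1-norm budget of the kind used by Brodie et al. (2009) can be sketched as follows. Note that when short selling is prohibited and the weights sum to one, the l1 norm is identically one, so the constraint only bites when short positions are allowed. Writing each weight as the difference of two non-negative parts turns the non-smooth norm constraint into linear ones; the covariance matrix and budget below are hypothetical, and SLSQP again stands in for a dedicated QP solver.

```python
import numpy as np
from scipy.optimize import minimize

def norm_constrained_min_variance(sigma, l1_budget):
    """Minimum-variance portfolio subject to ||w||_1 <= l1_budget,
    via the split w = w_plus - w_minus with both parts non-negative."""
    n = sigma.shape[0]

    def variance(z):
        w = z[:n] - z[n:]
        return w @ sigma @ w

    cons = [
        {"type": "eq",
         "fun": lambda z: (z[:n] - z[n:]).sum() - 1.0},             # fully invested
        {"type": "ineq",
         "fun": lambda z: l1_budget - (z[:n] + z[n:]).sum()},       # l1 budget
    ]
    z0 = np.concatenate([np.full(n, 1.0 / n), np.zeros(n)])
    res = minimize(variance, z0, method="SLSQP", constraints=cons,
                   bounds=[(0.0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
w = norm_constrained_min_variance(sigma, l1_budget=1.2)
```

Tightening the budget towards 1 shrinks the short positions towards zero and zeroes out small weights, producing the sparse portfolios the section describes.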
2.4.3 'Resampled Efficiency'

The third approach computes optimal portfolios by generating many data sets with simulation techniques (e.g. Monte Carlo), with the overall solution given by averaging the resulting optimal portfolios. Michaud (1999) first proposed the so-called 'Resampled Efficiency' (RE) technique, which uses Monte Carlo simulation methods to replicate parameter uncertainty in an attempt to compute optimal mean-variance portfolios with better out-of-sample characteristics. Scherer (2002) provides a comprehensive review of the concept of portfolio resampling proposed by Michaud (1999) and indicates some possible weak points of the 'Resampled Efficiency' portfolio technique. In addition, Becker et al. (2015) carry out a complete simulation study with both constrained and unconstrained portfolio optimization processes for a variety of estimators. Although their empirical findings indicate that Markowitz (1952) overall performs better than Michaud (1999), they also provide strong evidence that the Markowitz mean-variance portfolio framework is more sensitive than Michaud (1999) to changes in the constraints on asset weights and to the different estimators used for the mean and variance of the asset returns.
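The resampling idea can be sketched as follows: re-estimate the inputs from bootstrap samples of the return history, solve the portfolio problem for each re-estimation, and average the weight vectors. This is only a stylized sketch, using the closed-form unconstrained minimum-variance portfolio for each resample and a simulated return history; Michaud's actual procedure resamples full mean-variance frontiers.

```python
import numpy as np

def resampled_min_variance(returns, n_resamples=200, seed=0):
    """Average the minimum-variance weights obtained from bootstrap
    resamples of the return history (rows = periods, cols = assets)."""
    rng = np.random.default_rng(seed)
    t, n = returns.shape
    ones = np.ones(n)
    weights = []
    for _ in range(n_resamples):
        sample = returns[rng.integers(0, t, size=t)]   # bootstrap the rows
        cov = np.cov(sample, rowvar=False)
        w = np.linalg.solve(cov, ones)                 # unconstrained min-variance
        weights.append(w / w.sum())                    # normalize to sum to one
    return np.mean(weights, axis=0)

rng = np.random.default_rng(1)
hist = rng.normal(0.01, 0.05, size=(120, 4))           # hypothetical return history
w_re = resampled_min_variance(hist)
```

Averaging across resamples smooths out the influence of any single noisy estimate, which is precisely the mechanism that makes the resampled portfolios less extreme.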
2.4.4 Optimal Mixture Portfolios

The fourth approach computes portfolios that are combinations of other portfolios, such as the minimum-variance, the mean-variance and the 1/N portfolios.
Hence, the intuition behind these 'mixture' portfolios is to shrink the portfolio weights directly. For instance, Kan and Zhou (2007) are motivated by the idea that estimation risk may not be efficiently diversified away by holding only a combination of the tangency portfolio (the portfolio with the highest Sharpe ratio) and the risk-free asset, and propose a 'three-fund' portfolio rule in the class of mixture portfolios that combines the mean-variance and minimum-variance portfolios, where the function of the 'third' fund is to reduce estimation risk. Furthermore, DeMiguel et al. (2009b) differentiate their position from Kan and Zhou (2007) and propose a mixture of the equally weighted and minimum-variance portfolios. Their main intuition is to put more emphasis on the estimation of the covariances rather than the means, given that it is well accepted that estimating expected returns is a much more difficult task than estimating covariances.
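The DeMiguel et al. (2009b) style of mixture can be sketched in a few lines: a convex combination of the 1/N portfolio and the minimum-variance portfolio. In the paper the combination weight is estimated from the data; here it is a fixed, hypothetical value, the minimum-variance leg is the closed-form unconstrained solution, and the covariance matrix is made up for illustration.

```python
import numpy as np

def mixture_portfolio(sigma, alpha):
    """Convex combination of the 1/N portfolio (weight alpha) and the
    closed-form unconstrained minimum-variance portfolio (weight 1 - alpha)."""
    n = sigma.shape[0]
    ones = np.ones(n)
    w_mv = np.linalg.solve(sigma, ones)
    w_mv /= w_mv.sum()                    # normalized min-variance weights
    return alpha * ones / n + (1.0 - alpha) * w_mv

sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w_mix = mixture_portfolio(sigma, alpha=0.5)
```

Since both legs sum to one, any convex combination is itself fully invested, and the 1/N leg pulls the estimated portfolio towards the estimation-free benchmark.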
2.4.5 Portfolios with Moment Restrictions

The fifth approach attempts to decrease the negative effects of estimation risk by constructing portfolios with moment restrictions. DeMiguel et al. (2009b) describe three different portfolio strategies that set restrictions on the estimation of the statistical moments of the asset returns: the well-known minimum-variance portfolio, the value-weighted portfolio implied by the market model, and portfolios constructed by asset-pricing models with unobservable factors. For the latter strategy, MacKinlay and Pastor (2000) show that the covariance matrix of the residual error terms of the factor model contains any mispricing resulting from unobservable factors.

2.4.6 Robust (Worst-Case) Optimization

Robust optimization is a relatively new numerical method that has grown thanks to the rapid improvement of computing technology in the last few years. Although robust optimization overlaps with stochastic and dynamic programming, it can be assigned to its own category and constitutes the sixth approach to dealing with estimation risk in the broad area of portfolio optimization. Robust optimization adopts a maximin approach and formulates 'worst-case' optimization problems, the so-called 'robust counterparts'. In particular, it assumes that the uncertain/stochastic input parameters of the portfolio optimization process are not known with certainty but lie within uncertainty sets. Hence, it tries to eliminate the possibility of selecting portfolios that promise good performance merely due to estimation errors, by computing optimal portfolio solutions under the assumption that the uncertain input data of the optimization problem take their worst-case values within these uncertainty structures (the worst-case scenario). At this point, we also have to point out that the size and shape (e.g. interval, ellipsoidal, polyhedral) of the uncertainty sets play a major role in this numerical method, since they alter the level of conservativeness of the asset allocation, characterize the risk preferences of investors and, most importantly, determine whether the resulting mathematical programming problems are computationally tractable (i.e. easily solved). For further technical details on the mathematics of robust optimization, we refer to Fabozzi et al. (2007) amongst others.
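For the simplest interval (box) uncertainty set, the robust counterpart stays tractable and can be sketched directly: if each mean lies in an interval around its estimate and the portfolio is long-only, the adversary's worst case is simply the lower endpoint of each interval. All numbers below are hypothetical, and the smooth mean-variance objective is solved with scipy's general-purpose SLSQP routine as a stand-in for a dedicated solver.

```python
import numpy as np
from scipy.optimize import minimize

def robust_mean_variance(mu_hat, delta, sigma, gamma):
    """Worst-case mean-variance portfolio under the box uncertainty set
    mu_i in [mu_hat_i - delta_i, mu_hat_i + delta_i].  For long-only
    weights the worst case subtracts delta from each estimated mean,
    so the robust counterpart remains a smooth, tractable program.
    gamma is the risk-aversion coefficient."""
    n = len(mu_hat)
    worst_mu = mu_hat - delta          # adversary picks the low end of each interval

    def neg_utility(w):
        return -(w @ worst_mu - gamma * (w @ sigma @ w))

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(neg_utility, np.full(n, 1.0 / n), method="SLSQP",
                   constraints=cons, bounds=[(0.0, 1.0)] * n)
    return res.x

mu_hat = np.array([0.10, 0.12])
delta = np.array([0.01, 0.05])         # asset 2's mean is much less certain
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.05]])
w_rob = robust_mean_variance(mu_hat, delta, sigma, gamma=2.0)
```

Note how the robust portfolio tilts away from the asset with the wider uncertainty interval even though its point estimate of the mean is higher; this is exactly the conservativeness that the size of the uncertainty set controls.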
Robust optimization has attracted significant interest in recent years from institutional investors and academics, who consider it a strong and very efficient method for computing optimal asset allocations subject to estimation risk in the input data; see for instance Ben-Tal and Nemirovski (1998). So far, as far as mean-variance portfolio strategies are concerned, robust optimization has been applied only in equity portfolio management, taking into account uncertainties due to estimation errors in the input parameters. Gabrel et al. (2014) describe the advances in worst-case optimization since 2007 across all areas of science (e.g. engineering, medicine, finance).