Because of the shape of return distributions, variance optimization fails to capture the true risk of a portfolio. The risk measure I prefer is Expected Shortfall, defined as the mean of the left tail of the distribution, below some given alpha.
Beyond VaR:
We call the value at the alpha the Value-at-Risk (VaR). VaR has been much maligned over the last few years -- not always for the right reasons. Some people assume all VaR measures rely on a normal distribution. Not so. The definition is that it is the left-tail alpha quantile of the expected P/L distribution. Nowhere does it assume a normal distribution -- people just wrongly use a normal distribution when calculating VaR.
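To make that concrete, here is a minimal sketch (the return sample and alpha are illustrative stand-ins, not the simulation used later in this post). Empirical VaR is just a quantile of the observed P/L:

# empirical VaR is a quantile of the observed P/L -- no normality assumed
set.seed(1)
pl    = rt(10000, df = 4) * .01    # fat-tailed stand-in for daily P/L
alpha = .05
quantile(pl, alpha)                # historical VaR at alpha
qnorm(alpha, mean(pl), sd(pl))     # what a (mis)applied normal assumption would give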
Jorion's definition of VaR from Value at Risk: The New Benchmark for Managing Financial Risk, 3rd Edition says (I'm paraphrasing) that VaR is the maximum expected loss at a given level of certainty during normal market conditions. That is often misinterpreted by management as THE number -- the maximum possible loss. Jorion's definition has a positive spin that invites the misinterpretation.
I prefer to put a negative spin on it: VaR is the minimum expected loss when you have a bad day. If I have a day that is at least in the alpha percentile of crappiness, then I will lose MORE than VaR.
VaR is the floor, not the ceiling. That's where it has been misused.
So why not optimize on VaR? Short answer: VaR is highly nonlinear. Add nonlinear instruments to your portfolio and you can no longer guarantee the convexity of the objective function.
Enter ES:
Expected Shortfall (ES) is the average of the losses beyond VaR -- the mean of the returns that fall below the VaR value. That is, when I'm having a really bad day, what is my expected loss? If I say a 1-in-20 day is bad (alpha = .05), then I should have 12-13 bad days a year. If my portfolio were static and the return distributions unchanging (not likely), the average of those 12-13 days should center on ES.
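Continuing the little sketch above (same illustrative pl and alpha), the historical ES is just the mean of the worst alpha fraction of observations:

# historical ES: mean of the returns at or below the alpha quantile
mean(pl[pl <= quantile(pl, alpha)])
# with alpha = .05 and ~250 trading days, this tail holds the ~12-13 worst days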
ES is still not perfect, because you still don't know how far into the tail the losses can go. But it does factor in the fat, skewed tail that we see. It is a good starting point for risk analysis.
Because ES is a mean, it is guaranteed to be a convex function of the portfolio weights (I'm still looking for the reference -- I've seen it before and will update the post if I find it). So ES is a better measure of risk than VaR, and we can trust optimization results from it. So let's build an optimization routine for ES.
nloptr:
nloptr is a nonlinear optimization library in R wrapping the GNU NLopt library. ES, while well behaved, is nonlinear. nloptr provides a number of nonlinear solvers. It's what we want.
For our test case, we will simulate a 4-variable normal distribution with 10,000 draws (correlation given below). We will build the ES function and a gradient function for ES. We will constrain portfolio weights to lie in [0,1] and impose an equality constraint that the weights sum to 1.
To confirm the results, I ran this 100 times and averaged the resulting weights and objective value. I did the same in SAS IML. UPDATE: SAS code located here.

require(MASS)
require(nloptr)

# Covariance structure for the simulation
# give everything a std of .1
n = 4
corr = c( 1,  .7,  .5,  .1,
         .7,   1,  .4, -.1,
         .5,  .4,   1,  .6,
         .1, -.1,  .6,   1)
corr = matrix(corr, n, n)
std = matrix(0, n, n)
for (i in 1:n) {
  std[i, i] = .1
}
cov = std %*% corr %*% std

# Simulate 10,000 draws
sim = mvrnorm(n = 10000, rep(0, n), cov)

# feasible starting values of equal weights
w = rep(1 / n, n)

# ES function: negated mean of the returns below the alpha quantile
# (reported as a positive loss)
es = function(w, sim = NA, alpha = .05) {
  ret = sort(sim %*% w)
  n = length(ret)
  i = alpha * n
  es = mean(ret[1:i])
  return(-es)
}

# linear equality constraint
# note: nloptr requires all functions to have the same signature
eval_g0 <- function(w, sim = NA, alpha = NA) {
  return(sum(w) - 1)
}

# central-difference numerical approximation of the gradient
des = function(w, sim = NA, alpha = .05) {
  n = length(w)
  out = w
  for (i in 1:n) {
    up = w
    dn = w
    up[i] = up[i] + .0001
    dn[i] = dn[i] - .0001
    out[i] = (es(up, sim = sim, alpha = alpha) - es(dn, sim = sim, alpha = alpha)) / .0002
  }
  return(out)
}

# use nloptr to check our gradient
check.derivatives(w, es, des, sim = sim, alpha = .05)

# function to optimize -- a list of objective and gradient
toOpt = function(w, sim = NA, alpha = .05) {
  list(objective = es(w, sim = sim, alpha = alpha),
       gradient  = des(w, sim = sim, alpha = alpha))
}

# equality constraint function; the jacobian is 1 for all variables
eqCon = function(w, sim = NA, alpha = .05) {
  list(constraints = eval_g0(w, sim = sim, alpha = alpha),
       jacobian    = rep(1, length(w)))
}

# optimization options
opts <- list("algorithm" = "NLOPT_LD_SLSQP",
             "xtol_rel"  = 1.0e-7,
             "maxeval"   = 1000)

# run optimization and print results
nl = nloptr(w, toOpt,
            lb = rep(0, 4), ub = rep(1, 4),
            eval_g_eq = eqCon, opts = opts,
            sim = sim, alpha = .05)
print(nl)
s = nl$solution
obj = nl$objective
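One note on the listing: it shows a single run, while the reported numbers average 100 runs. The wrapper was roughly of this shape (a sketch of one way to collect s and obj so the summary below works; the exact loop is not in the original listing):

# re-simulate and re-optimize 100 times, collecting weights and objective values
s   = matrix(0, 100, n)
obj = rep(0, 100)
for (j in 1:100) {
  sim = mvrnorm(n = 10000, rep(0, n), cov)
  nl  = nloptr(w, toOpt, lb = rep(0, 4), ub = rep(1, 4),
               eval_g_eq = eqCon, opts = opts, sim = sim, alpha = .05)
  s[j, ] = nl$solution
  obj[j] = nl$objective
}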
From R we get:

> apply(s,2,mean)
[1] 0.101697131193604 0.420428046895474 0.000000000004212 0.477874821906712
> mean(obj)
[1] 0.1371

In SAS we get:

wgt 0.1015076 0.4218076 -4.75E-20 0.4766848
obj 0.1375891

I am comfortable with the results.
Final Thoughts:
I was only able to get the SLSQP routine to converge. This routine builds a local quadratic approximation of the function, runs a QP to update the variables, and then iterates. I am not sure why the others failed.
The number of routines available for problems with equality constraints is limited. On top of that, the NLopt documentation lists several other routines that should accept linear constraints, but the R implementation throws an error. A number of augmented Lagrangian routines are available for equality constraints, but again the only one that worked used SLSQP as the sub-optimizer, and it produced the same result as SLSQP alone (in about 5x the time).
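For the record, the augmented Lagrangian attempt was wired up roughly like this (a sketch reusing toOpt and eqCon from above; NLopt runs AUGLAG as an outer loop around the local optimizer named in local_opts):

# augmented Lagrangian outer loop with SLSQP as the sub-optimizer
aug_opts = list("algorithm"  = "NLOPT_LD_AUGLAG",
                "xtol_rel"   = 1.0e-7,
                "maxeval"    = 1000,
                "local_opts" = list("algorithm" = "NLOPT_LD_SLSQP",
                                    "xtol_rel"  = 1.0e-7))
nl_aug = nloptr(w, toOpt,
                lb = rep(0, 4), ub = rep(1, 4),
                eval_g_eq = eqCon, opts = aug_opts,
                sim = sim, alpha = .05)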
The time to run the optimization in R is high. During the 100-iteration sample it took 1.5-10 seconds per loop, depending on the internal iterations nloptr needed. In all it took about 10 minutes. The SAS code ran in 45 seconds. I see the following as reasons:
- SLSQP is not as efficient as other routines. It has to approximate a Hessian matrix numerically. Finding another routine that reliably converges could be a big help.
- My gradient function is numeric. Each time the gradient is computed it takes a large number of FLOPs. This is the same in SAS where the IML optimizer calculates numeric derivatives.
- As previously discussed, matrix multiplication in R is slow, and this code relies heavily on it. SAS IML is optimized for matrix math.
If anyone has ideas on how to speed up the R processing, I would love to hear them. Maybe there are other, more efficient nonlinear optimization libraries in R. I'm sure there are parts of my code that can be sped up as well; one candidate is sketched below.
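For example, one unbenchmarked sketch: the full sort inside es is wasteful, because the tail mean only needs the k worst returns, and base R's partial sort can deliver exactly those:

# ES via partial sort: only the k smallest returns are positioned;
# their order among themselves doesn't matter for a mean
es_fast = function(w, sim, alpha = .05) {
  ret = drop(sim %*% w)           # one matrix multiply for portfolio returns
  k   = floor(alpha * length(ret))
  ps  = sort(ret, partial = k)    # elements 1..k are the k worst returns
  return(-mean(ps[1:k]))
}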
Very interesting post! Do you have any plans to try this with actual return data, such as the edhec data provided in the PerformanceAnalytics package, or a basket of ETFs? The optimized portfolio could then be compared to other approaches such as relative strength, equal weight, min variance, etc.
Rcpp and RcppArmadillo may be a good option to speed up the computations.
Min/low volatility strategies are gaining traction in the portfolio management community. I was planning on looking at minimum ES as a portfolio construction tool.
Thanks for the post. There is still one thing I do not understand: why is expected shortfall a convex function (of what?) while VaR is not? Can you give me a reference on this?
ReplyDeletePORTFOLIO OPTIMIZATION WITH CONDITIONAL VALUE-AT-RISK OBJECTIVE AND CONSTRAINTS -- Pavlo Krokhmal1, Jonas Palmquist, and Stanislav Uryasev (2001)
They reference
Optimization of Conditional Value-at-Risk -- Rockafellar and Uryasev (2000)
Thank you very much...
No problem.
This one is also good (I've only read half of it, but it is on my list for reading after work today).
Shortfall as a Risk Measure: Properties, Optimization and Applications -- Dimitris Bertsimas, Geoffrey J. Lauprete, and Alexander Samarov
Thanks for the share. I wonder if this study is suitable for copula-based optimization, by changing sim to a copula simulation? I have been trying to solve copula-based optimization in R for months but still could not get a good result.
You read my mind. I am in the process of putting together a post using copula simulations and this optimization for low volatility portfolio selection. I hope to have it up this weekend.
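In the meantime, here is a minimal sketch of the idea (assuming the copula package; the t copula and its parameters below are illustrative, reusing the upper triangle of the post's correlation matrix, and are not from the upcoming post). The copula draws simply replace the mvrnorm call; everything downstream stays the same:

require(copula)

# t copula for fat joint tails; params are the upper triangle of corr (column-major)
tc  = tCopula(param = c(.7, .5, .4, .1, -.1, .6), dim = 4, dispstr = "un", df = 4)
u   = rCopula(10000, tc)             # 10,000 draws of dependent uniforms
sim = qnorm(u, mean = 0, sd = .1)    # map back to the normal(0, .1) marginals
# sim now feeds the same nloptr call as in the post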
I'll be looking forward to it. I am comparing results from modern portfolio theory and copula theory, but I am having problems solving the copula-based optimization. I am looking at one-day returns, but in the copula-based optimization the stock weights change so much, which does not sound good, and the risk increases too; I am not sure whether that is normal.
DeleteRespect and I have a nifty proposal: Where To Start House Remodeling home renovation contractors near me
ReplyDelete