Solved – Graphical lasso numerical problem (non-SPD result)

I am trying to apply glasso on a very simple and sparse dataset made up of 60+ features and 30k+ observations. Here you can find it in CSV format, if you are interested in reproducing the issue.

I am using the sklearn implementation with very few lines of code, trying different values for the regularization coefficient $\alpha$:

from sklearn.covariance import GraphLasso

for alpha in [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]:
    glasso_model = GraphLasso(alpha=alpha, mode='lars', max_iter=2000)
    glasso_model.fit(X)

What I am experiencing is that the model cannot fit a covariance estimate: it stops after raising an exception complaining about the non-PSD nature of the problem:

/usr/local/lib/python3.4/dist-packages/sklearn/covariance/ in graph_lasso(emp_cov, alpha, cov_init, mode, tol, max_iter, verbose, return_costs, eps, return_n_iter)
    245         e.args = (e.args[0]
    246                   + '. The system is too ill-conditioned for this solver',)
--> 247         raise e
    248
    249     if return_costs:

/usr/local/lib/python3.4/dist-packages/sklearn/covariance/ in graph_lasso(emp_cov, alpha, cov_init, mode, tol, max_iter, verbose, return_costs, eps, return_n_iter)
    236                 break
    237             if not np.isfinite(cost) and i > 0:
--> 238                 raise FloatingPointError('Non SPD result: the system is '
    239                                          'too ill-conditioned for this solver')
    240         else:

FloatingPointError: Non SPD result: the system is too ill-conditioned for this solver. The system is too ill-conditioned for this solver

If I try to compute an MLE of the covariance with another sklearn function (which is, by the way, the same function that the graph_lasso procedure uses internally), the resulting matrix is indeed PSD. So I suspect that the problem lies somewhere in the glasso computation itself.
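For reference, the PSD claim can be checked directly by looking at the eigenvalue spectrum of the empirical covariance. This is a minimal sketch using stand-in random data, since the original CSV is not reproduced here:

```python
import numpy as np
from sklearn.covariance import empirical_covariance

# Hypothetical stand-in for the dataset in the question
# (many observations, ~60 features).
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 60))

# Same MLE that graph_lasso starts from.
emp_cov = empirical_covariance(X)

# An empirical covariance is PSD up to floating-point error; verify
# via its eigenvalues (eigvalsh is meant for symmetric matrices).
eigvals = np.linalg.eigvalsh(emp_cov)
print("min eigenvalue:", eigvals.min())
```

A min eigenvalue at or above roughly `-1e-10` means the matrix is PSD to within numerical precision, even if the solver later diverges.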

I am also normalizing or standardizing the data (zero mean, unit variance) before applying the method, but the problem still persists.
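The standardization step I mean is the usual one, sketched here with hypothetical data in place of the CSV:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical unscaled data (arbitrary mean and variance).
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=10.0, size=(1000, 3))

# Rescale each column to zero mean and unit variance.
X_std = StandardScaler().fit_transform(X)

print(X_std.mean(axis=0).round(6))  # per-column means, all ~0
print(X_std.std(axis=0).round(6))   # per-column stds, all ~1
```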

Any idea about it? Am I missing some key point in applying the glasso? Is it possible to do something meaningful with another toolkit?

I ran into the same issue with some data I was using in my research. While I don't quite understand what leads to this mathematically/computationally, hopefully my answer and the code below help.

A few comments on your problem:

  • The raw CSV file includes data fields which have not been de-meaned or scaled. Normalizing the data is an important step for this kind of processing, and can be accomplished with sklearn's StandardScaler class.

  • The l1-regularized covariance implementation seems to be sensitive to numerical instabilities when the empirical covariance matrix has a wide eigenvalue range. Your initial data has eigenvalues roughly in the range [0, 3e6].

  • After normalizing your input data, the eigenvalues of your empirical covariance matrix still span a relatively large range of about [0, 8]. Shrinking this using the sklearn.covariance.shrunk_covariance() function can bring it into a more computationally acceptable range (from what I've read, [0, 1] is ideal, but slightly larger ranges also appear to work).

If anyone knows what's going on mathematically/computationally to cause this error, and what the caveats of shrinking the covariance matrix are for interpreting the output, I'd love to hear your comments and improvements. That said, the code below appears to work both for the problem presented by @rano and for the errors I've run into in my own research (~10k samples of data from the energy market).

import numpy as np
import pandas as pd
from sklearn import covariance, preprocessing

myData = pd.read_csv('Data/weight_comp_simple_prop.df.train.csv')
X = myData.values.astype('float64')

# Standardize to zero mean and unit variance
myScaler = preprocessing.StandardScaler()
X = myScaler.fit_transform(X)

emp_cov = covariance.empirical_covariance(X)
# Set shrinkage closer to 1 for poorly-conditioned data
shrunk_cov = covariance.shrunk_covariance(emp_cov, shrinkage=0.8)

alphaRange = 10.0 ** np.arange(-8, 0)  # 1e-8 to 1e-1 by order of magnitude
for alpha in alphaRange:
    try:
        # graph_lasso returns the estimated covariance and its sparse inverse
        graphCov, graphPrec = covariance.graph_lasso(shrunk_cov, alpha)
        print("Calculated graph-lasso covariance matrix for alpha=%s" % alpha)
    except FloatingPointError:
        print("Failed at alpha=%s" % alpha)
