LIME is a recent method that claims to explain individual predictions of any classifier in a model-agnostic way. See e.g. the arXiv paper or its implementation on GitHub for details.

I am trying to understand what exactly it outputs. For that, I am using a trivial example: logistic regression.

Consider the following set of events:

```
import numpy

data = []
for t in range(100000):
    a = 1 - 2 * numpy.random.random()  # U(-1, 1)
    b = 1 - 2 * numpy.random.random()  # U(-1, 1)
    noise = numpy.random.logistic()
    c = int(a + b + noise > 0)         # the target
    data.append([a, b, c])
data = numpy.array(data)
x = data[:, :-1]
y = data[:, -1]
```

This is a latent logistic process with parameters $a_0 = 0$, $a_1 = a_2 = 1$, which logistic regression fits asymptotically.
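To see why, note that with standard logistic noise,

$$P(c = 1 \mid a, b) = P(\text{noise} > -(a + b)) = \frac{1}{1 + e^{-(a+b)}} = \sigma(a + b),$$

which is exactly a logistic regression model with intercept $0$ and unit coefficients.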

Let us fit the data using logistic regression:

```
import sklearn.linear_model

classifier = sklearn.linear_model.LogisticRegression(C=1e10)  # C -> inf => effectively no regularization
classifier.fit(x, y)
print(classifier.coef_)  # [[ 0.99092809  1.00551462]]
```
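As a quick sanity check (exact numbers depend on the random draw), the model's predicted probability at the instance we are about to explain should be roughly $\sigma(1 + 1) = \sigma(2) \approx 0.88$:

```
instance = numpy.array([1.0, 1.0])
# predict_proba expects a 2D array, so reshape the single instance
print(classifier.predict_proba(instance.reshape(1, -1)))
# approximately [[0.12, 0.88]]
```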

Now, let's apply LIME to it:

```
import lime.lime_tabular

explainer = lime.lime_tabular.LimeTabularExplainer(x, feature_names=['a', 'b'])
instance = numpy.array([1, 1])
explanation = explainer.explain_instance(instance, classifier.predict_proba, num_samples=100000)
print(explanation.as_list())
```

The result I get is something like this:

```
[('a > 0.50', 0.2216), ('b > 0.50', 0.2170)]
```

The question is: what is this output supposed to mean?


#### Best Answer

In the implementation repo you linked there is an example notebook that explains this; see cell 8.

Note that LIME has discretized the features in the explanation. This is because we left `discretize_continuous=True` in the constructor (this is the default).
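If I read the implementation correctly, the default discretizer bins each feature at the quartiles of the training data, which is why the conditions sit at 0.50: for $a, b \sim U(-1, 1)$ the quartiles are roughly $-0.5, 0, 0.5$, and the instance $[1, 1]$ falls in the top bin of both features. A quick check:

```
# Training-data quartiles of feature 'a'; with a ~ U(-1, 1) these should be
# approximately [-0.5, 0.0, 0.5], so 'a > 0.50' is the top quartile bin.
print(numpy.percentile(x[:, 0], [25, 50, 75]))
```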

If you modify the constructor to

```
explainer = lime.lime_tabular.LimeTabularExplainer(x, feature_names=['a', 'b'],
                                                   discretize_continuous=False)
```

then the output is something like

```
[('b', 0.11223168027269199), ('a', 0.11110292313683988)]
```
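These values are close to the gradient of the fitted classifier's probability surface at the instance: $\left.\partial \sigma(a + b) / \partial a\right|_{(1,1)} = \sigma(2)\,(1 - \sigma(2)) \approx 0.105$, and likewise for $b$.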

My understanding is that both explanations, discretized or not, are the coefficients of a linear model that approximates the classifier in the vicinity of the instance [1, 1].
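Here is a minimal sketch of that idea. It simplifies what `lime_tabular` actually does (the real implementation standardizes features using training statistics, uses a particular kernel width, and selects features before fitting a weighted ridge regression), but it conveys the mechanism: sample points around the instance, weight them by proximity, and fit a linear surrogate to the black-box probabilities.

```
import numpy
import sklearn.linear_model

def local_linear_explanation(predict_proba, instance, scale=0.5,
                             kernel_width=0.75, num_samples=10000):
    """Fit a distance-weighted linear surrogate to the classifier's
    probability surface around `instance` (simplified LIME)."""
    rng = numpy.random.RandomState(0)
    # Perturb the instance with Gaussian noise; `scale` stands in for the
    # per-feature training std that LIME would use.
    samples = instance + scale * rng.randn(num_samples, len(instance))
    # Proximity weights: nearby samples count more in the fit.
    distances = numpy.linalg.norm(samples - instance, axis=1)
    weights = numpy.exp(-distances ** 2 / kernel_width ** 2)
    # The target is the black-box probability of the positive class.
    targets = predict_proba(samples)[:, 1]
    surrogate = sklearn.linear_model.Ridge(alpha=1.0)
    surrogate.fit(samples, targets, sample_weight=weights)
    return surrogate.coef_

print(local_linear_explanation(classifier.predict_proba,
                               numpy.array([1.0, 1.0])))
# coefficients of the same order as LIME's non-discretized weights
```

So, as far as I can tell, the numbers LIME reports are the slopes of that local surrogate: how much the predicted probability changes per unit change in each feature near the explained instance. In the discretized case the same kind of weights are reported, but for the binary indicator features (`a > 0.50`, etc.) rather than the raw values.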
