At present, for each instance, the sum of the Shapley values will not exactly equal the model prediction. Assuming that the reference population is the right population, in the limit of Monte Carlo samples, the sum of the feature-level Shapley values will equal that instance's prediction. However, due to the stochastic nature of the sampling, this may not be the case in practice.
I'll add an optional argument to shap() that adjusts the Shapley values so that the additivity constraint holds at the instance level.
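The idea behind such an adjustment can be sketched as follows. This is a minimal illustration in Python, not the package's actual implementation: it assumes the simplest reconciliation scheme, spreading the Monte Carlo residual equally across the feature-level Shapley values for one instance (the function name reconcile and all values are hypothetical).

```python
import numpy as np

def reconcile(phi, intercept, prediction):
    """Adjust noisy Monte Carlo Shapley values for a single instance so that
    intercept + sum(adjusted phi) == prediction (the additivity constraint).
    The residual is split equally across features; this equal split is an
    illustrative assumption, not necessarily how shap() does it."""
    residual = prediction - (intercept + phi.sum())
    return phi + residual / phi.size

# Hypothetical example: three feature-level Shapley estimates for one instance.
phi = np.array([0.40, -0.10, 0.25])
intercept = 0.50
prediction = 1.00

phi_adj = reconcile(phi, intercept, prediction)
# After adjustment, additivity holds up to floating-point precision:
# intercept + phi_adj.sum() == prediction
```

Because the Monte Carlo residual is typically small relative to the prediction, each per-feature correction is smaller still, so feature rankings are essentially unchanged.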
Added a reconcile_instance = false argument to shap(). The default is false, and the feature still needs unit tests, but the results look correct. The adjustments are likely to be very small for any applied problem (i.e., moving the Shapley sum only a percent or two closer to the model prediction).
The instance-level adjustment is not exact: there is still some tolerance between the sum of the adjusted Shapley values plus the model intercept and the raw model predictions. I need to explore this further on some applied problems.