Hello,
I needed exactly this functionality for a university project and implemented it here: #26.
Additionally, to cope with larger data sets, I exposed the kmeans function from the SHAP Python library to help summarize data instances.
Hi there,
is there an established way to obtain SHAP feature importance using shapper?
Reading this https://christophm.github.io/interpretable-ml-book/shap.html#shap-feature-importance
...I would guess that looping over shapper::individual_variable_effect and taking the mean of the absolute attributions per vname could do the trick.
Am I wrong?
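Roughly, I mean something like the following (untested sketch; `ive` stands for a data frame of attributions collected from individual_variable_effect for several observations, and I assume the `_vname_` and `_attribution_` columns that show up in its output):

```r
# Sketch only: SHAP feature importance as mean(|attribution|) per variable.
# `ive` is assumed to hold individual_variable_effect() results
# (one row per observation x variable, with `_vname_` and `_attribution_`).
shap_importance <- aggregate(
  list(mean_abs_attribution = abs(ive[["_attribution_"]])),
  by = list(variable = ive[["_vname_"]]),
  FUN = mean
)
# Sort variables by importance, largest first
shap_importance[order(-shap_importance$mean_abs_attribution), ]
```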
Is there any plan to integrate the original functions, like summary_plot, to obtain SHAP feature importance?
By the way, when I try to feed the function individual_variable_effect multiple new observations, e.g. new_observation = testX[1:5, ], I get errors:
Error in `$<-.data.frame`(`*tmp*`, "_attribution_", value = c(0, -0.365675633989662, :
  replacement has 140 rows, data has 70