In the latest release of TensorFlow Ranking v0.3.1, we introduce a canned version of Neural RankGAM (arXiv), an interpretable learning-to-rank model based on generalized additive models.
What is a Neural RankGAM?
Neural RankGAM is an adaptation of generalized additive models (GAMs). We use a similar model structure in TensorFlow Ranking so that the model can be trained with ranking losses for learning-to-rank problems.
A Neural RankGAM scores each item individually. When no context features are available, it constructs a separate submodel for each feature and takes the sum of the submodels' outputs as the ranking score. For an item x represented by n features (x_1, x_2, ..., x_n), the ranking score is:
F(x) = f_1(x_1) + f_2(x_2) + ... + f_n(x_n)
In our implementation, each submodel is instantiated by a (small) feed-forward neural network.
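The additive scoring above can be sketched in plain Keras (this is not the TF-Ranking canned API; all layer sizes and names here are illustrative):

```python
import numpy as np
import tensorflow as tf

def make_submodel(hidden_dim=16):
    # One small feed-forward network f_i that maps a single feature
    # column to a scalar contribution f_i(x_i).
    return tf.keras.Sequential([
        tf.keras.layers.Dense(hidden_dim, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

class AdditiveScorer(tf.keras.Model):
    """Scores an item as F(x) = f_1(x_1) + ... + f_n(x_n)."""

    def __init__(self, num_features):
        super().__init__()
        self.submodels = [make_submodel() for _ in range(num_features)]

    def call(self, x):  # x: [batch, num_features]
        # Each submodel sees only its own feature column.
        contributions = [
            f(x[:, i:i + 1]) for i, f in enumerate(self.submodels)
        ]  # list of [batch, 1] tensors
        return tf.add_n(contributions)  # [batch, 1]

scorer = AdditiveScorer(num_features=3)
scores = scorer(np.random.rand(4, 3).astype("float32"))  # shape (4, 1)
```

Because each f_i depends on exactly one feature, the per-feature contributions can later be read off directly from the submodels.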
When there are m context features (c_1, c_2, ..., c_m), the ranking score is:
F(c, x) = w_1(c) * f_1(x_1) + w_2(c) * f_2(x_2) + ... + w_n(c) * f_n(x_n)
where (w_1(c), w_2(c), ..., w_n(c)) is a weighting vector determined solely by the context features. For each context feature c_j, a feed-forward submodel derives a weighting vector (w_j1(c_j), w_j2(c_j), ..., w_jn(c_j)), and the final weighting vector is the sum of these per-context-feature outputs.
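The context-weighted score F(c, x) = sum_i w_i(c) * f_i(x_i) can be sketched as follows (again a plain-Keras illustration, not the canned API; sizes and names are made up):

```python
import numpy as np
import tensorflow as tf

num_features, num_context = 3, 2  # illustrative sizes

# Per-feature item submodels f_i, as in the context-free case.
item_nets = [
    tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                         tf.keras.layers.Dense(1)])
    for _ in range(num_features)
]

# One submodel per context feature c_j, each emitting an n-dimensional
# weighting vector (w_j1(c_j), ..., w_jn(c_j)); the vectors are summed
# over j to form the final weighting vector.
context_nets = [
    tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                         tf.keras.layers.Dense(num_features)])
    for _ in range(num_context)
]

def score(c, x):
    """Computes F(c, x) = sum_i w_i(c) * f_i(x_i)."""
    # Summed weighting vector: [batch, num_features].
    w = tf.add_n([net(c[:, j:j + 1]) for j, net in enumerate(context_nets)])
    # Stacked item contributions: [batch, num_features].
    f = tf.concat([net(x[:, i:i + 1]) for i, net in enumerate(item_nets)],
                  axis=1)
    return tf.reduce_sum(w * f, axis=1, keepdims=True)  # [batch, 1]

scores = score(np.random.rand(4, num_context).astype("float32"),
               np.random.rand(4, num_features).astype("float32"))
```

Note that the context features only rescale the per-feature contributions; the additive structure over item features is preserved.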
For a more detailed and formal description, please refer to the Neural RankGAM paper.
Why Neural RankGAM?
We propose Neural RankGAM as an interpretable ranking model. Because each submodel takes only one feature as input, you can visualize each submodel f_i by plotting its output f_i(x_i) against the corresponding feature x_i. This makes it much easier to understand each feature's contribution and to diagnose potential bugs.
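Such a visualization can be produced by sweeping one feature over a grid and recording the submodel's output; a minimal sketch (with an untrained stand-in for a trained submodel f_i):

```python
import numpy as np
import tensorflow as tf

# A hypothetical trained submodel f_i; here just an untrained stand-in.
f_i = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                           tf.keras.layers.Dense(1)])

# Sweep the i-th feature over its observed range (assumed [0, 1] here)
# and record the scalar contribution f_i(x_i) at each grid point.
grid = np.linspace(0.0, 1.0, 100, dtype="float32").reshape(-1, 1)
curve = f_i(grid).numpy().ravel()  # shape (100,)

# Plotting `grid` against `curve` (e.g. with matplotlib) shows exactly
# how this one feature moves the ranking score.
```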
Note that, due to its additive structure, Neural RankGAM does not capture higher-order feature interactions and may therefore perform worse than "black-box" neural models such as a fully-connected feed-forward network. Use Neural RankGAM in applications that require highly transparent models, or as a preliminary step to gain insight into your data sets.
How to use Neural RankGAM?
We provide a canned estimator, along with example code showing how to train the model.
For Keras users, we also provide a canned Keras RankingNetwork and corresponding example code.