Using call_get_leaves inside @tf.function call in ensemble model inherits from tensorflow.keras.Model #199
Updated the Colab link with a shareable notebook.
Hi, thank you for updating the link to the Colab, I will have a look!
It seems to be something about the shapes of the output to ...

```python
# TODO: want to change this to leaves predictions
tfdf_output = self.tfdf_model.call_get_leaves(inputs)
tfdf_output = tf.cast(tfdf_output, tf.float32)
# Reshape to (batch_size, num_trees); the number of trees (5) is hard-coded here.
tfdf_output = tf.reshape(tfdf_output, [tf.shape(inputs)[0], 5])
# Keep gradients from flowing back into the decision forest.
tfdf_output = tf.stop_gradient(tfdf_output)
```

... but you have to know the number of trees (5) in advance and hard-code it into the model.
Btw, I simply converted the leaf numbers to float values, but if I were to combine the models, I'd definitely either embed the leaf numbers (a different embedding per tree) or add an extra NN (Dense) layer on top (which is equivalent).
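To make the per-tree embedding idea concrete, here is a minimal numpy sketch (all names and sizes are hypothetical; in Keras this would correspond to one `tf.keras.layers.Embedding` per tree, with the results concatenated):

```python
import numpy as np

rng = np.random.default_rng(0)

num_trees = 3   # trees in the (hypothetical) TF-DF model
max_leaves = 8  # upper bound on leaves per tree
embed_dim = 4   # embedding size per tree

# One independent embedding table per tree: (num_trees, max_leaves, embed_dim).
tables = rng.normal(size=(num_trees, max_leaves, embed_dim))

def embed_leaves(leaf_ids):
    """leaf_ids: (batch, num_trees) int array of leaf indices.

    Each tree's leaf index is looked up in that tree's own table,
    then the per-tree vectors are concatenated along the feature axis.
    Returns an array of shape (batch, num_trees * embed_dim)."""
    per_tree = [tables[t][leaf_ids[:, t]] for t in range(num_trees)]
    return np.concatenate(per_tree, axis=1)

leaf_ids = rng.integers(0, max_leaves, size=(5, num_trees))
out = embed_leaves(leaf_ids)
print(out.shape)  # (5, 12)
```

The resulting `(batch, num_trees * embed_dim)` tensor can then be concatenated with the dense-layer output, just like the float-cast leaf numbers above.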
Hi, thank you for your solution!

Environment details: (followed the compatibility table here)

Got this error on `call_get_leaves` (regular prediction worked just fine):

```
File "/usr/local/lib/python3.10/site-packages/tensorflow_decision_forests/keras/core_inference.py", line 767, in call_get_leaves *
    assert len(self._models_get_leaves) == 1
TypeError: object of type 'NoneType' has no len()
```

This was also reproduced in the Colab notebook using the mentioned versions.
Hi @advahadr, I'm sorry you are having these difficulties -- I have 2 hypotheses:
One short-term alternative, which would also work for (1) above: generate the leaf values first, as a separate step, and then concatenate the leaf values to the inputs of the Keras model. This is not convenient :(, but it will work if your environment allows this intermediary step. You could even materialize (save to disk alongside the inputs) the leaf values after training the TF-DF model.

We'll look into this (most likely tomorrow, there is a conference going on today) and get back to you. If you could provide more details on how you are using it in your environment (SageMaker), it would be very helpful! Is your pipeline something that reads the model and then runs inference on it, in Python? Or is it using the TensorFlow C++ API? etc.
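The intermediary-step workaround could look roughly like this numpy sketch (the leaf indices are simulated with random integers here; with a real model they would come from a separate TF-DF leaves-prediction pass, and the in-memory buffer stands in for a file saved to disk):

```python
import io
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (once, after training the TF-DF model): materialize the leaf
# indices for the whole dataset and save them next to the inputs.
features = rng.random((100, 7)).astype(np.float32)  # original inputs
leaves = rng.integers(0, 32, size=(100, 5))         # simulated (examples, num_trees)

buf = io.BytesIO()  # stands in for a .npy file on disk
np.save(buf, leaves)

# Step 2 (when training the Keras model): load the materialized leaves
# and concatenate them to the features as extra input columns.
buf.seek(0)
loaded = np.load(buf).astype(np.float32)
augmented = np.concatenate([features, loaded], axis=1)
print(augmented.shape)  # (100, 12)
```

The Keras model then trains on the augmented inputs directly, so it never has to call into the TF-DF model inside its own `call`.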
Hi @janpfeifer,

Regarding 2:

Regarding 1:
Later in the funnel we are also predicting on this ensemble model.

Regarding the saved-model problem:

Regarding the alternative:

Regarding the SM environment:

Hope it's a bit clearer; if not, I can elaborate more.

Regards,
Hi Adva,

Sorry to hear about your troubles. Let me also try to help :).

Regarding the shape, the output of `call_get_leaves` can be given a static shape with `set_shape`, reading the number of trees from the model's inspector instead of hard-coding it:

```python
def __init__(self, tfdf_model):
    ...
    self.tfdf_model = tfdf_model
    # Read the number of trees from the trained model instead of hard-coding it.
    self.num_trees = tfdf_model.make_inspector().num_trees()

@tf.function
def call(self, inputs):
    ...
    tfdf_output_leaves = self.tfdf_model.call_get_leaves(inputs)
    tfdf_output_leaves_casted = tf.cast(tfdf_output_leaves, tf.float32)
    # Pin the static shape to (batch, num_trees) so downstream layers can use it.
    tfdf_output_leaves_casted.set_shape((None, self.num_trees))
    concatenated = self.concat_nn_tfdf([x, tfdf_output_leaves_casted])
```
About saving your model. Saving a model (e.g.

I copied and updated your notebook with those changes. You can find it here: https://colab.research.google.com/drive/1TIPdzDN0UDLAXtcVICmsdh9YEDhW12LO?usp=sharing

Cheers,
Hi @rstz, thank you for your great help! I'm getting there but still have some issues. I tried to use the code you provided in our repo on SageMaker:

The print logs show:

```
2023-11-20T18:23:15.784+02:00 | tfdf_output_leaves: Tensor("StatefulPartitionedCall:0", shape=(2048, None), dtype=int32)
2023-11-20T18:23:15.784+02:00 | tfdf_output_leaves_casted: Tensor("Cast:0", shape=(2048, 3), dtype=float32)
```

And the error I got (`Incompatible shapes: [80,1] vs. [2048,1]`):

```
ErrorMessage "tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error
Detected at node 'gradient_tape/binary_crossentropy/mul_1/Mul' defined at (most recent call last)
```

One thing to note: this is a non-blocking issue for me, but when calling:

Would appreciate your help! Thank you,
Adva
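For what it's worth, one common cause of an `[80,1]` vs `[2048,1]` mismatch is a batch dimension frozen to a concrete value (here 2048, as seen in the `shape=(2048, None)` print) colliding with a smaller final batch of the dataset. A tiny sketch of the arithmetic (the dataset size `N` is hypothetical, chosen so the remainder is 80):

```python
# A dataset of N examples batched at 2048 leaves a smaller final batch;
# any tensor whose batch dimension was frozen to 2048 then mismatches it.
N, batch = 10320, 2048  # hypothetical: 5 * 2048 + 80
sizes = [min(batch, N - i) for i in range(0, N, batch)]
print(sizes)  # [2048, 2048, 2048, 2048, 2048, 80]
```

Keeping the batch dimension `None` in `set_shape` (as in the snippet above), or batching with `drop_remainder=True`, avoids freezing the batch size into the graph.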
Hi,
Hi All,
I would like to get your help on the following ensemble architecture:
I created this Colab notebook for your convenience.
I'm using the output of a pre-trained TF-DF model and concatenating it to a dense layer's output. When I call the TF-DF model directly, I can concatenate it to the output of the dense layer [please see class MyEnsembleWorking]. However, my problem is when trying to concatenate the index of the leaves instead, by using `call_get_leaves` [please see class MyEnsembleLeaves].
When adding the line:

```python
tfdf_output_leaves = tf.stop_gradient(self.tfdf_model.call_get_leaves(inputs))
```

it seems that the output has no static shape; I get this print:

```
tfdf_output_leaves: Tensor("StopGradient_1:0", shape=(None, None), dtype=int32)
```

and I can't work further with this output to concatenate it.
I wonder what is the correct way to ensemble the leaf predictions, rather than the probability, in my architecture.
Would appreciate any help,
Regards
Adva