I started using Talos several weeks ago and really appreciate how easy and helpful it is. However, I came across a few ideas that might be interesting (and easy) to implement:
As far as I know, the best model would be the one that generalizes best, i.e. the one with the lowest error from the Evaluate() class, rather than the one with the lowest val_loss. I noticed that Deploy() can only save the model in the second case, so I added a model id as input so that it uses the best model from Evaluate() if available (and falls back to best_model otherwise). I'm not an expert on machine learning, though, and I'd gladly follow your advice on that.
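The fallback logic described above could be sketched like this (a pure-Python sketch; the row layout and the `eval_error` column name are hypothetical stand-ins for the scan log, not Talos's actual API):

```python
def pick_best_model_id(results):
    """Pick the round to deploy: prefer the lowest Evaluate() error when
    it was recorded, otherwise fall back to the lowest val_loss.

    results: list of dicts, one per scan round, e.g.
    {"round": 3, "val_loss": 0.21, "eval_error": 0.25}.
    'eval_error' is an assumed key, present only if Evaluate() was run.
    """
    evaluated = [r for r in results if r.get("eval_error") is not None]
    if evaluated:
        # best generalization error wins when evaluation results exist
        return min(evaluated, key=lambda r: r["eval_error"])["round"]
    # otherwise fall back to the round with the lowest validation loss
    return min(results, key=lambda r: r["val_loss"])["round"]
```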
I initially had a problem using the hidden_layers function from model/layer.py with the Keras functional API, because it was not intended for that use. A simple fix I made looks like this:
# 'layer' holds the tensor coming in from the earlier part of the
# functional-API model; each iteration stacks Dense + Dropout onto it
for i in range(self.hidden_layers):
    layer = Dense(self.layer_neurons[i],
                  activation=self.activation,
                  use_bias=self.use_bias,
                  kernel_initializer=self.kernel_initializer,
                  kernel_regularizer=self.kernel_regularizer,
                  bias_initializer=self.bias_initializer,
                  bias_regularizer=self.bias_regularizer,
                  activity_regularizer=self.activity_regularizer,
                  kernel_constraint=self.kernel_constraint,
                  bias_constraint=self.bias_constraint)(layer)
    layer = Dropout(self.dropout)(layer)
return layer
Since all the parameters are the same between the sequential and functional API models, I ended up with a class for hidden_layers and two functions, one for each case. I don't know if this is actually something that came up in a later release, but it's an easy fix.
Related to [FEATURE REQUEST] Interrupted scan to resume where it left off #97. Maybe it would be easier to have a recoverable mode that compares the parameter grid against the parameters already tested in the .csv file. This feature would be especially nice when running on a cluster and the job exceeds its allotted running time. It could also be used to launch several instances of the same experiment on different nodes, each checking whether the current configuration has already been computed (probably not optimal, though, given the reduction techniques in the scan).
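Such a recoverable mode could look roughly like this (a stdlib-only sketch; the CSV layout with one column per parameter is an assumption, not Talos's actual log format):

```python
import csv
import os
from itertools import product

def remaining_configs(param_grid, log_csv_path):
    """Return the grid combinations not yet recorded in the experiment log.

    param_grid: dict mapping parameter name -> list of candidate values.
    log_csv_path: CSV with one column per parameter and one row per
    tested configuration (assumed layout).
    """
    keys = sorted(param_grid)
    all_combos = set(product(*(param_grid[k] for k in keys)))
    tested = set()
    if os.path.exists(log_csv_path):
        with open(log_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # CSV values come back as strings, so compare as strings
                tested.add(tuple(row[k] for k in keys))
    return [dict(zip(keys, combo)) for combo in all_combos
            if tuple(str(v) for v in combo) not in tested]
```

On resume (or on each node of a multi-node run), the scan would then only iterate over what `remaining_configs` returns.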
Maybe add some verbosity about the reduction techniques in the scan (a few prints saying that x parameter configurations have been removed). I might not be using it correctly, but I never know whether it has actually been applied or not...
I hope I'm not wasting your time on this, but Talos is something I really look forward to using extensively in the future, and I would be happy to contribute some improvements.
Thanks a lot for the elaborate feedback :) And sorry, I totally forgot to respond to you. Regarding 1, this is now handled in two ways (available in 0.4.6): there are several new related attributes in the Scan object, and there is also a new command Autom8() that takes you from the start of Scan() all the way to predictions and the best model.
For 2, I agree that the hidden_layers approach could be more intuitive; I will create a new issue for discussion. This is in #157.
For 3, thanks for referencing it. This is a rather low-priority item at this point, but we'll probably get to it later.
For 4, this is clearly a must-have; as you suggested, the user is currently totally in the dark. Creating a new issue. This is in #158.