NeuralNetworkClassifier Accuracy Updates #813
Comments
Hi Kieran, thank you for posting on qiskit-machine-learning! To summarise the Slack thread, could you please describe again what makes the […]
Hi, it would be great to see training and test accuracies in the callback. The callback feature suggested for PegasosQSVC in #599 was for viewing the objective function value.
Hi @kezmcd1903, so I think this is all already possible by utilising the callback.

The Goal

I am essentially going to give you a code implementation to have "The ability to save the training and test accuracy progress while training a QNN". I saw you were struggling trying to implement parameter saves and loads due to the optimiser re-initialising. I don't have a good solution for that, but I am confident we can sort this with the callback.

The Callback function

You're right to use the callback.
Other than that, it can do whatever the user wants and is simply limited by the user's creativity. This includes storing the object weights and functions, performing tests, printing or storing test results, creating graphs, or anything else.

Example - Running test suites at the end of each epoch

Here's a little pseudocode for what I would write, though I can think of many other ways to do it, so if you want something else, let me know and we can workshop something more specific. The simplest method would be to simply store the weights across all of training and then retrospectively run your testing. One thing to note before we get into this: if we want to avoid saving the model and then warm starting, you might need to do some funky stuff to form epochs. I would suggest padding your dataset to include multiple copies of your data, i.e. one huge array essentially forming your epochs. You would, however, need to specify the epoch length (…).
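The "store weights, test retrospectively" approach can be sketched as follows. The callback in qiskit-machine-learning's `NeuralNetworkClassifier` receives two arguments, `(weights, obj_func_eval)`; the stand-in training loop, and the names `weights_log` and `objective_log`, are purely illustrative so the sketch runs on its own:

```python
# Sketch: record weights and objective values at every optimiser
# iteration so accuracies can be computed after training finishes.
# The real NeuralNetworkClassifier callback has the same two-argument
# signature: callback(weights, obj_func_eval).

weights_log = []      # one entry per optimiser iteration
objective_log = []

def callback(weights, obj_func_eval):
    weights_log.append(list(weights))
    objective_log.append(obj_func_eval)

# Stand-in for the optimiser driving training (illustrative only);
# in practice the optimiser calls the callback for you.
for step in range(5):
    fake_weights = [0.1 * step, 0.2 * step]
    fake_objective = 1.0 / (step + 1)
    callback(fake_weights, fake_objective)

print(len(weights_log))   # 5
print(objective_log[0])   # 1.0
```

After training, iterate over `weights_log`, reload each set of weights into the model, and score it on your train and test sets.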
Or better yet, save those results to a file so as not to pollute the global namespace. If you really want the testing suite run dynamically during training, you just need a way of checking whether you've finished an epoch.
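One way to do that epoch check dynamically is to count callback invocations and fire the test suite on every epoch boundary, appending results to a file. Everything here is a hedged sketch: `EPOCH_LEN`, the file name, and `run_tests` are hypothetical placeholders (only the two-argument callback signature matches qiskit-machine-learning):

```python
import csv
import os

EPOCH_LEN = 3                        # assumed iterations per epoch (illustrative)
RESULTS_FILE = "epoch_results.csv"   # hypothetical output file name
if os.path.exists(RESULTS_FILE):
    os.remove(RESULTS_FILE)          # start fresh for this demo

iteration = 0

def run_tests(weights):
    # Placeholder for the user's own test suite; returns a dummy
    # "accuracy" so the example is self-contained and runnable.
    return sum(weights) / (len(weights) or 1)

def callback(weights, obj_func_eval):
    global iteration
    iteration += 1
    if iteration % EPOCH_LEN == 0:   # epoch boundary reached
        acc = run_tests(weights)
        # Append to a file instead of polluting the global namespace.
        with open(RESULTS_FILE, "a", newline="") as f:
            epoch = iteration // EPOCH_LEN
            csv.writer(f).writerow([epoch, acc, obj_func_eval])

# Stand-in driver: 6 iterations -> 2 epochs -> 2 rows in the file.
for step in range(6):
    callback([0.5, 0.5], 1.0 / (step + 1))
```

In real use you would pass this `callback` to the classifier's constructor and let the optimiser drive it.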
Please let me know if you have any questions. I appreciate that this feels a little like a botch job, but I wanted to show that the functionality is already there. However, saving optimizer 'memory' could be worth looking into, to avoid the pitfalls you described seeing when warm starting with an initial point.

P.S. I saw you were struggling to use RawFeatureVector in the Slack channel; I am too, it is very frustrating. Keep in mind, though, that RawFeatureVector doesn't work with gradient-based methods.
@kezmcd1903 did the […]
What should we add?
The ability to save the training and test accuracy progress while training a QNN (with NeuralNetworkClassifier) per epoch / iteration. This gives us much more information than viewing the loss alone and is important to display in any QML paper.
I thought a nice way to do this would be to follow https://qiskit-community.github.io/qiskit-machine-learning/tutorials/09_saving_and_loading_models.html and break the training up into epochs, then test and save the model at regular intervals.
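The shape of that epoch loop can be sketched as below. A stub model stands in for the real classifier so the sketch is self-contained; with qiskit-machine-learning you would instead use `NeuralNetworkClassifier(..., warm_start=True)` with its `fit`/`score`/`save`/`load` methods, as the linked tutorial shows (and note the optimizer-memory caveat discussed next):

```python
# Sketch of "train an epoch, test, checkpoint, repeat". StubModel is a
# placeholder so this runs without qiskit; the loop structure is the point.

class StubModel:
    def __init__(self):
        self.weights = [0.0]

    def fit(self, X, y):
        # Pretend to train for one "epoch" by nudging the weights.
        self.weights = [w + 1.0 for w in self.weights]

    def score(self, X, y):
        # Pretend accuracy that improves with training.
        return min(1.0, self.weights[0] / 5)

model = StubModel()
history = []                                  # (epoch, test_accuracy) pairs
X_train = y_train = X_test = y_test = None    # placeholder datasets

for epoch in range(1, 4):
    model.fit(X_train, y_train)               # one epoch of training
    history.append((epoch, model.score(X_test, y_test)))
    # model.save(f"epoch_{epoch}.model")      # checkpoint would go here

print(history)
```

The catch with doing this via save/load, rather than via the callback, is exactly the optimizer-state reset described below.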
I think there's quite a big issue here that the tutorial fails to mention: each time you save and load your model, your optimizer's 'memory' resets (I'm using COBYLA). That means that if your objective function landscape is difficult to navigate, you get repeated behaviour.
See my Slack post for more details: https://qiskit.slack.com/archives/C7SJ0PJ5A/p1720017452449239.