Improve QSVR unit tests #434

Open
adekusar-drl opened this issue Jul 7, 2022 · 18 comments
Labels
good first issue (Good for newcomers) · type: enhancement ✨ (Features or aspects to improve)

Comments

@adekusar-drl
Collaborator

What is the expected enhancement?

Currently, the unit tests in TestQSVR are based on a classification dataset, e.g., the labels are [0, 0, 1, 1]. Such a dataset is not well suited to testing regressors. This issue intends to fix this problem by implementing unit tests on a more suitable dataset.
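
For illustration, a minimal sketch of the mismatch; the binary labels match the issue description above, while the continuous target is a hypothetical replacement, not taken from the actual test file:

```python
import numpy as np

# Current test target: binary class labels, as quoted above.
y_classification = np.array([0, 0, 1, 1])

# A regression-suited target is continuous, e.g. samples of a smooth
# function (hypothetical replacement, not the actual test data):
features = np.array([[0.0], [0.25], [0.5], [0.75]])
y_regression = np.sin(2 * np.pi * features).ravel()
```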

@adekusar-drl added the good first issue (Good for newcomers), priority: low, and type: enhancement ✨ (Features or aspects to improve) labels on Jul 7, 2022
@bopardikarsoham

bopardikarsoham commented Jul 11, 2022

Can we use these datasets in the TestQSVR unit test to fix this issue?

@adekusar-drl
Collaborator Author

These are too large for unit tests. Most likely an artificial one, say, with fewer than 10 samples, should be good. Or maybe even smaller; right now there are 4 training samples and 2 test samples.

@AbdullahKazi500

Hi @adekusar-drl, I am working on this issue at the moment.

@adekusar-drl
Collaborator Author

@AbdullahKazi500 do you have ideas about what exactly should be fixed in this issue?

@AbdullahKazi500

Can you assign me the issue?

@adekusar-drl
Collaborator Author

Sure, I can. But the question I posted above is still valid.

@AbdullahKazi500

First, choose an appropriate regression dataset that is small enough to be used for unit tests but still representative of the regression problem.

Then:

1. Create a new unit test module, or modify the existing TestQSVR module, to include the new test cases. The module should import the modules needed for dataset loading and splitting, plus the QSVR implementation under test.
2. Load the regression dataset and split it into training and testing subsets.
3. Create a QSVR estimator object with default hyperparameters.
4. Fit the QSVR estimator on the training data and make predictions on the test data.
5. Calculate performance metrics such as the R^2 score, mean squared error, or mean absolute error.
6. Compare the performance metrics with the expected values and check that they are within acceptable thresholds.
7. Repeat the steps with different hyperparameters to check whether the implementation is sensitive to them.
8. Run the unit tests and make sure that they all pass (see the sketch below).
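
A minimal sketch of what such a test could look like, assuming the public `QSVR` and `FidelityQuantumKernel` APIs of `qiskit-machine-learning`; the dataset, feature map, and thresholds are illustrative assumptions, not the project's actual test code:

```python
import unittest

import numpy as np
from qiskit.circuit.library import ZFeatureMap
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

from qiskit_machine_learning.algorithms import QSVR
from qiskit_machine_learning.kernels import FidelityQuantumKernel


class TestQSVR(unittest.TestCase):
    """Sketch of a regression-oriented QSVR test."""

    def test_qsvr_regression(self):
        # Tiny artificial regression dataset: noiseless sine samples
        # (assumed values, chosen only for illustration).
        features = np.linspace(0, 1, 8).reshape(-1, 1)
        labels = np.sin(2 * np.pi * features).ravel()
        x_train, x_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.25, random_state=42
        )

        # QSVR with fixed, default hyperparameters and a quantum kernel.
        kernel = FidelityQuantumKernel(feature_map=ZFeatureMap(1))
        qsvr = QSVR(quantum_kernel=kernel)
        qsvr.fit(x_train, y_train)

        # Compare a metric against an acceptable threshold (value assumed).
        y_pred = qsvr.predict(x_test)
        self.assertLess(mean_squared_error(y_test, y_pred), 0.5)


if __name__ == "__main__":
    unittest.main()
```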

@adekusar-drl
Collaborator Author

Thanks for getting back. A few comments:

  • The dataset should be really small, like a couple of data points. Currently, there are 4 of them in the test. One idea to try out is to use the regression dataset from https://github.com/Qiskit/qiskit-machine-learning/blob/main/docs/tutorials/02_neural_network_classifier_and_regressor.ipynb. But you are welcome to propose something else.
  • I suggest modifying the existing test cases unless it is inconvenient to update them.
  • There's no need to split the dataset; these are just tests.
  • One metric is enough. Currently, the tests compare the evaluated score against the true value.
  • I'm not sure hyperparameterization is required here.

So, the work should be pretty straightforward; see the dataset sketch below. Let me know if you have questions.
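
Putting those constraints together, a sketch of how the existing test data could be swapped out, with no train/test split step and a single score check; the sine-based values follow the spirit of the linked tutorial and are assumptions, not the final dataset:

```python
import numpy as np

# Four training and two test samples, mirroring the current test sizes,
# but with a continuous target (assumed values).
features_train = np.array([[0.0], [0.25], [0.5], [0.75]])
labels_train = np.sin(2 * np.pi * features_train).ravel()
features_test = np.array([[0.125], [0.625]])
labels_test = np.sin(2 * np.pi * features_test).ravel()

# Single metric, as suggested above: compare the evaluated score against
# a fixed reference value, e.g.
#     score = qsvr.score(features_test, labels_test)
#     self.assertAlmostEqual(score, expected_score, places=2)
```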

@AbdullahKazi500

I will keep you updated.

@AbdullahKazi500

Hey @adekusar-drl. Regarding the size of the dataset, you mentioned that a small dataset with a couple of data points should be sufficient for the test. As for the metrics, you suggested that one metric is enough; currently, the test compares the evaluated score with the true value.

In terms of hyperparameterization, are you sure it is necessary for this specific test case? It may depend on the nature of the test and the algorithm being tested. However, if hyperparameterization is not required, then it is not necessary to include it in the test. Can you clarify this?

@adekusar-drl
Collaborator Author

In general, there's no need for hyperparameterization. Just to be sure we are on the same page, what hyperparameters are you considering?

@AbdullahKazi500

I guess it won't be necessary, right? For the Variational Quantum Classifier (VQC), hyperparameters include the number of layers in the quantum circuit, the depth of each layer, the type of parameterized gates used in each layer, and the number of training iterations. For the Quantum Kernel Estimator (QKE), hyperparameters include the type of quantum kernel used, the regularization strength, and the number of training iterations. For the Quantum Support Vector Machine (QSVM), hyperparameters include the type of kernel used, the regularization strength, and the number of training iterations. @adekusar-drl

@adekusar-drl
Collaborator Author

No, all of them should be fixed to certain values, and that's it. Tests must check that the algorithms work and that the results are meaningful, but no more.

@adekusar-drl
Collaborator Author

@AbdullahKazi500 is there any progress here?

@AbdullahKazi500

Hi @adekusar-drl, I am comparing the performance metrics with the expected values and checking that they are within acceptable thresholds. Then I will run the unit tests and make sure that they all pass.

@edoaltamura
Collaborator

Hi @AbdullahKazi500 has there been any recent progress? If so, please organize and submit the commits in a PR.

@AbdullahKazi500

Yes, I will be submitting a PR. @edoaltamura

@edoaltamura
Collaborator

edoaltamura commented Oct 18, 2024

I haven't heard recent updates on this test and wanted to touch base and share a few timeline considerations, FYI. We aim to include this update in the v0.8 initial release, which is expected around the end of October 2024. This means we would like long-standing community contributions, like this one, to be pull-requested by Friday 25 October, leaving enough time to coordinate the merging and release preparation.

@AbdullahKazi500 Are you still available to work on this test, following the suggestions in #434 (comment), by then, or would you rather step out on this occasion?
