Memory Error while using VQC #837
Hi @Ilyaant, thanks for your question. Could you please share the part of your code that triggers the allocation?
Hello @edoaltamura, thank you very much for your answer. The code snippet that I run is:

# Imports assumed from the Qiskit ML stack; adjust to your installed version
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms import VQC

num_features = X_train.shape[1]
feature_map = ZZFeatureMap(feature_dimension=num_features, reps=1)
ansatz = RealAmplitudes(num_qubits=num_features, reps=3)
optimizer = COBYLA(maxiter=100)
vqc = VQC(
    feature_map=feature_map,
    ansatz=ansatz,
    optimizer=optimizer,
)
vqc.fit(X_train, y_train.to_numpy())

The full description of the exception:
@edoaltamura For now, it seems to me that the issue is inside the
Hi @Ilyaant, I am not 100% sure why this is going wrong, but it is worth noting that log2(1048576) = 20 exactly, so I am assuming that something is going wrong with your array size of 20. Try checking the shape of the arrays heading into the network and what happens to them when you run .to_numpy(). Otherwise, it is challenging for us to investigate this bug without access to the full script. I'd also try running VQC.score() with some of your data before fitting; this essentially checks whether the problem is in VQC or in the COBYLA optimizer.
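A minimal sketch of the suggested sanity checks (the names X_train and y_train are taken from the snippet in this thread; adapt to your own script):

```python
import numpy as np

def check_inputs(X, y):
    """Report shapes and dtypes of the arrays heading into VQC.fit()."""
    X = np.asarray(X)
    y = np.asarray(y)
    assert X.ndim == 2, f"expected 2-D features, got shape {X.shape}"
    assert y.ndim == 1, f"expected 1-D labels, got shape {y.shape}"
    assert len(X) == len(y), "features and labels disagree in length"
    return X.shape, y.shape, X.dtype, y.dtype

# Usage (hypothetical data standing in for the real training set):
# check_inputs(X_train, y_train.to_numpy())
```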
Hello! While fitting VQC on a (3599, 20) training dataset I faced the following exception:
MemoryError: Unable to allocate 1.64 GiB for an array with shape (1048576,) and data type <U420
Could you please explain the error to me? I'm working on a machine with 120 GB of RAM and don't understand how the program cannot allocate 1.64 GB. Is there a way to solve the problem?
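For reference, the reported figure is self-consistent: a NumPy array of shape (1048576,) with dtype <U420 stores 420 UTF-32 characters (4 bytes each) per element, which comes out to exactly the quoted amount. A quick check in plain Python:

```python
# Verify the reported allocation: shape (1048576,) with dtype <U420.
# NumPy stores <U420 as 420 UTF-32 code points, i.e. 4 bytes per character.
n_elements = 1048576          # 2**20, matching the 20 features/qubits
bytes_per_element = 420 * 4   # <U420 -> 420 chars x 4 bytes each
total_gib = n_elements * bytes_per_element / 2**30
print(round(total_gib, 2))    # -> 1.64
```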
The idea I have is to divide the dataset into smaller parts and fit the VQC on them. But here's another question: can I call the fit() method multiple times on different data chunks, or will the model be trained from scratch each time?
Still, I don't like this idea; I'd rather find a way to limit the memory usage itself. Thank you!
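On the chunked-fitting idea: qiskit-machine-learning's trainable models accept a warm_start constructor flag which, when set to True, makes each subsequent fit() call start from the previously found parameters instead of reinitialising them (check the documentation of your installed version). A minimal, framework-agnostic sketch of the chunking loop, with the qiskit-specific parts shown only as hedged comments:

```python
def iter_chunks(X, y, chunk_size):
    """Yield successive (X, y) chunks of at most chunk_size rows."""
    for start in range(0, len(X), chunk_size):
        yield X[start:start + chunk_size], y[start:start + chunk_size]

# Hypothetical usage with VQC (assumes the snippet earlier in this thread;
# warm_start=True carries the learned weights across fit() calls):
# vqc = VQC(feature_map=feature_map, ansatz=ansatz,
#           optimizer=optimizer, warm_start=True)
# for X_chunk, y_chunk in iter_chunks(X_train, y_train.to_numpy(), 500):
#     vqc.fit(X_chunk, y_chunk)
```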