
refactor QuantumNetwork to support general quantum networks #1244

Merged: 84 commits into master from quantum_network on Jul 20, 2024

Conversation

@Canoming (Contributor) commented Mar 2, 2024

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

I've refactored QuantumNetwork to support general quantum networks and to make it more compatible with standard tensor contraction methods.

To make things clear, I made a class diagram for the future development of the module:

[class diagram]

@Canoming added the "quantum_info module" label Mar 2, 2024
@Canoming self-assigned this Mar 2, 2024
@renatomello marked this pull request as draft March 6, 2024 16:02
@scarrazza added this to the Qibo 0.2.8 milestone Mar 26, 2024
@Canoming marked this pull request as ready for review April 17, 2024 07:45
@Canoming requested a review from renatomello April 18, 2024 12:02
@Canoming (Contributor, Author)

The wheel build fails after I merged master into this branch. I'm not sure what is causing the problem.

@renatomello (Contributor)

@Canoming can you merge master into this branch and fix the conflicts? Also, can you mark which of the suggested changes have already been implemented? Thanks.

@Canoming (Contributor, Author)

Hi, as of my last commit all the suggested changes are implemented. The only remaining issue is that the TensorFlow error class cannot be caught.

@renatomello (Contributor)

There are still new conflicts with master to fix.

Comment on lines 84 to 92
try:
    tensor = np.transpose(operator.reshape(list(partition) * 2), order)
    return tensor.reshape([dim**2 for dim in partition])
except:
    raise_error(
        ValueError,
        "``partition`` does not match the shape of the input matrix. "
        + f"Cannot reshape matrix of size {operator.shape} to partition {partition}",
    )
It seems best to check whether partition and operator.shape match before attempting the calculation. Raise an error if they don't, and only do the calculation if they do. That way you avoid using try and except completely.
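A minimal sketch of the check-first pattern being suggested, assuming the names operator, partition, and order from the snippet above and a plain ValueError in place of qibo's raise_error (hypothetical helper, not the PR's final code):

import numpy as np

def _operator_to_tensor(operator, partition, order):
    # Check that ``partition`` is compatible with the operator's size before
    # reshaping, so no try/except around the reshape is needed.
    expected_size = int(np.prod(partition)) ** 2
    if operator.size != expected_size:
        raise ValueError(
            "``partition`` does not match the shape of the input matrix. "
            f"Cannot reshape matrix of size {operator.shape} to partition {partition}"
        )
    tensor = np.transpose(operator.reshape(list(partition) * 2), order)
    return tensor.reshape([dim**2 for dim in partition])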

from functools import reduce
from logging import warning
from numbers import Number
@Canoming This question still remains: we don't need this Number class for typing, do we?

Comment on lines 95 to 115
try:
    tensor = np.transpose(operator.reshape(list(partition) * 2), order)
    return tensor.reshape([dim**2 for dim in partition])
except ValueError:
    raise_error(
        ValueError,
        "``partition`` does not match the shape of the input matrix. "
        + f"Cannot reshape matrix of size {operator.shape} to partition {partition}",
    )
except InvalidArgumentError:
    raise_error(
        ValueError,
        "``partition`` does not match the shape of the input matrix. "
        + f"Cannot reshape matrix of size {operator.shape} to partition {partition}",
    )
except RuntimeError:
    raise_error(
        ValueError,
        "``partition`` does not match the shape of the input matrix. "
        + f"Cannot reshape matrix of size {operator.shape} to partition {partition}",
    )
@renatomello (Contributor) commented Jul 17, 2024

There is a much simpler way of implementing this. Just check whether partition matches operator.shape. If it doesn't, raise your custom error; if it does, do the calculation. None of these try / except blocks are needed. I had already mentioned this in the previous review round.

Comment on lines 123 to 124
Returns:
    bool: Hermiticity condition.
    :class:`qibo.quantum_info.quantum_networks.QuantumNetwork`:
@Canoming also not resolved

@Canoming (Contributor, Author)

For the Number type mentioned in #1244 (comment): we include it because some backends implement non-standard classes for float numbers. A type error may be raised if a quantum_network is multiplied by one of those float numbers.

@renatomello (Contributor)

> For the Number type mentioned in #1244 (comment): we include it because some backends implement non-standard classes for float numbers. A type error may be raised if a quantum_network is multiplied by one of those float numbers.

I see. Which backend exactly? Do you have an example code?

@Canoming (Contributor, Author)

> For the Number type mentioned in #1244 (comment): we include it because some backends implement non-standard classes for float numbers. A type error may be raised if a quantum_network is multiplied by one of those float numbers.

> I see. Which backend exactly? Do you have an example code?

I encountered the issue before, but I don't remember the exact scenario. It was PyTorch that caused the problem: the output of some of its functions inherits the data type of the torch.Tensor, returning a number of a type such as torch.float32 instead of a plain float.
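A minimal sketch of the distinction under discussion, using NumPy scalars as a stand-in for the half-remembered PyTorch case (whether this matches the original failure is an assumption):

from numbers import Number

import numpy as np

x = np.float32(0.5)           # backend-specific scalar, not a Python float
print(isinstance(x, float))   # False: np.float32 does not subclass float
print(isinstance(x, Number))  # True: NumPy scalars are registered with the numbers ABCs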

@renatomello (Contributor)

> I encountered the issue before, but I don't remember the exact scenario. It was PyTorch that caused the problem: the output of some of its functions inherits the data type of the torch.Tensor, returning a number of a type such as torch.float32 instead of a plain float.

That's fine. That happens everywhere, but we don't explicitly check whether it's a float or an np.float64 or a torch.float64, etc. Just type it as float and the rest is implicit. There's no need to use the Number class.

@renatomello added this pull request to the merge queue Jul 20, 2024
Merged via the queue into master with commit c598c67 Jul 20, 2024
25 checks passed
@renatomello deleted the quantum_network branch July 20, 2024 11:48
Labels: enhancement (New feature or request), quantum_info module (PRs and issues related to the quantum information module)
4 participants