
Communication inside the MPI simulator may cause overflow error when the message is large #42

Open
HaoTy opened this issue Feb 13, 2024 · 1 comment


HaoTy (Collaborator) commented Feb 13, 2024

This happens, for example, when calling the MPI simulator's get_probabilities(), which calls self._comm.allgather(result.copy_to_host()). If the vector is too large (e.g. 32 qubits), the following is raised:

  File "mpi4py/MPI/Comm.pyx", line 1595, in mpi4py.MPI.Comm.allgather
  File "mpi4py/MPI/msgpickle.pxi", line 862, in mpi4py.MPI.PyMPI_allgather
  File "mpi4py/MPI/msgpickle.pxi", line 147, in mpi4py.MPI.pickle_dump
  File "mpi4py/MPI/msgbuffer.pxi", line 50, in mpi4py.MPI.downcast
OverflowError: integer 34359738509 does not fit in 'int'

This is due to a legacy limitation of mpi4py's pickle-based communication (see mpi4py/mpi4py#119), which is addressed by mpi4py.util.pkl5. It seems the switch to pkl5 can be made by simply changing the current self._comm = MPI.COMM_WORLD to self._comm = pkl5.Intracomm(MPI.COMM_WORLD).
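For reference, a minimal sketch of the suggested change (standalone variable names are illustrative; assumes mpi4py >= 3.1, where mpi4py.util.pkl5 is available):

  from mpi4py import MPI
  from mpi4py.util import pkl5  # pickle-protocol-5 based communication helpers

  # Before: plain communicator; pickled messages whose size does not fit
  # in a C int trigger the OverflowError above.
  # comm = MPI.COMM_WORLD

  # After: pkl5 wrapper with the same communicator interface, but with
  # support for messages larger than the 2 GiB / C int limit.
  comm = pkl5.Intracomm(MPI.COMM_WORLD)

  # Existing call sites would keep the same shape, e.g. in get_probabilities():
  # probabilities = comm.allgather(result.copy_to_host())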


rsln-s (Contributor) commented Feb 13, 2024

@danlkv, would you be able to take a look at this?
