Feature/feature buildout #4

Merged 5 commits on Jul 10, 2024
17 changes: 6 additions & 11 deletions README.md
@@ -1,24 +1,18 @@
# Summary

# TODOs
- [ ] Add docstrings
  - [X] Pauli
  - [X] PauliString
  - [ ] PauliOp
  - [ ] SummedPauliOp
- [ ] Figure out the transpose nonsense or support both (some functions take the transpose of the states and others don't)
- [ ] Clean up `apply_batch`; we shouldn't need to pass a coeff
- [ ] Clean up tests
  - [X] Clean up test utils
- [X] Add type aliases and factory functions to utils for fast_pauli
- [X] Search the names and make sure we don't have any overlap with other projects
- [ ] Build out pauli decomposer
- [X] Remove the weights argument and rename to data
- [X] Add namespace
- [ ] Add apply method to SummedPauliOp that takes precomputed weighted data
- [ ] Writeup for docs
- [ ] Add pybind11 interface and python examples
- [ ] Change function names to default to the parallel impl and use `_serial` for the serial implementation
- [ ] Change functions that may run in parallel to take [`std::execution_policy`](https://en.cppreference.com/w/cpp/algorithm/execution_policy_tag_t)
- [ ] Possibly add levels (as in BLAS) to group methods by how they scale
- [ ] Migrate `PauliOp` and `SummedPauliOp` to only store mdspans rather than copies of the data itself
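One TODO above proposes threading a `std::execution_policy` through parallel-capable functions. A minimal sketch of that shape (the name `scale_all` is hypothetical, not part of fast_pauli):

```cpp
#include <algorithm>
#include <execution>
#include <utility>
#include <vector>

// Hypothetical sketch: the caller chooses std::execution::seq / par /
// par_unseq and we forward it to the standard algorithm. Note that
// parallel policies may need extra link flags (e.g. TBB with libstdc++).
template <class Policy>
void scale_all(Policy &&policy, std::vector<double> &v, double a) {
  std::for_each(std::forward<Policy>(policy), v.begin(), v.end(),
                [a](double &x) { x *= a; });
}
```

With this shape, the serial/parallel split falls out of the policy argument rather than a `_serial` suffix.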

## Requirements
@@ -32,7 +26,7 @@
## Build and Test

```bash
cmake -B build -DCMAKE_CXX_COMPILER=clang++
cmake --build build
ctest --test-dir build
```
@@ -42,3 +36,4 @@ ctest --test-dir build
The C++ portion of this library relies heavily on spans and views.
These lightweight accessors are helpful and performant, but can lead to dangling spans or accessing bad memory if used improperly.
Developers should familiarize themselves with these dangers by reviewing [this post](https://hackingcpp.com/cpp/std/span.html).

30 changes: 30 additions & 0 deletions docs/planning.md
@@ -0,0 +1,30 @@
# General


## Notation

- Pauli Matrix $\sigma_i \in \{ I,X,Y,Z \}$
- Pauli String $\mathcal{\hat{P}} = \bigotimes_i \sigma_i$
- State vector $\ket{\psi}$ and a set of $n$ state vectors $\ket{\psi_t}$ represented as columns of a matrix
- Sum of weighted Pauli strings (currently called `PauliOp`) $A_k = \sum_i h_{ik} \mathcal{\hat{P_i}}$
- Sum of summed weighted Pauli strings (currently called `SummedPauliOp`) $B = \sum_k \sum_i h_{ik} \mathcal{\hat{P_i}}$
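As a concrete instance of the notation, the two-qubit Pauli string $X \otimes Z$ expands to

$$
X \otimes Z =
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\otimes
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1 \\
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{pmatrix}
$$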


# List of Operations

Here's a terse list of the types of operations we want to support in `fast_pauli` (this list will grow over time):

1. Pauli String to sparse matrix (Pauli Composer)
2. $\mathcal{\hat{P}} \ket{\psi}$
3. $\mathcal{\hat{P}} \ket{\psi_t}$
4. $\bra{\psi_t} \mathcal{\hat{P_i}} \ket{\psi_t}$
5. $\bra{\psi_t} x_{ti} \mathcal{\hat{P_i}} \ket{\psi_t}$
6. $\big( \sum_i h_i \mathcal{\hat{P}}_i \big) \ket{\psi_t}$
7. $\big(\sum_k \sum_i h_{ik} \mathcal{\hat{P}}_i \big) \ket{\psi_t}$
8. $\big(\sum_k x_{tk} \sum_i h_{ik} \mathcal{\hat{P}}_i \big) \ket{\psi_t}$
9. $\big(\sum_k ( \sum_i h_{ik} \mathcal{\hat{P}}_i )^2 \big) \ket{\psi_t}$
10. $\bra{\psi_t} \{ \mathcal{\hat{P_i}}, \hat{A_k} \} \ket{\psi_t}$
11. $\bra{\psi_t} ( \sum_i h_{ik} \mathcal{\hat{P}}_i ) \ket{\psi_t}$
12. $\bra{\psi_t} \big(\sum_k x_{tk} \sum_i h_{ik} \mathcal{\hat{P}}_i \big) \ket{\psi_t}$
13. $\bra{\psi_t} ( \sum_i h_{ik} \mathcal{\hat{P}}_i )^2 \ket{\psi_t}$
14. $\bra{\psi_t} \big(\sum_k ( \sum_i h_{ik} \mathcal{\hat{P}}_i )^2 \big) \ket{\psi_t}$
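For instance, operation 2 ($\mathcal{\hat{P}} \ket{\psi}$) can be realized naively by materializing the string as a dense Kronecker product. This is an illustrative sketch only (all names are hypothetical); the point of `fast_pauli` is to avoid this dense construction:

```cpp
#include <complex>
#include <string>
#include <vector>

using cx = std::complex<double>;
using Mat = std::vector<std::vector<cx>>;

// 2x2 single-qubit Pauli matrices: I, X, Y, Z
Mat pauli(char p) {
  switch (p) {
    case 'I': return {{1, 0}, {0, 1}};
    case 'X': return {{0, 1}, {1, 0}};
    case 'Y': return {{0, cx(0, -1)}, {cx(0, 1), 0}};
    default:  return {{1, 0}, {0, -1}};  // 'Z'
  }
}

// Kronecker product of two dense matrices
Mat kron(const Mat &a, const Mat &b) {
  size_t ra = a.size(), ca = a[0].size(), rb = b.size(), cb = b[0].size();
  Mat out(ra * rb, std::vector<cx>(ca * cb));
  for (size_t i = 0; i < ra; ++i)
    for (size_t j = 0; j < ca; ++j)
      for (size_t k = 0; k < rb; ++k)
        for (size_t l = 0; l < cb; ++l)
          out[i * rb + k][j * cb + l] = a[i][j] * b[k][l];
  return out;
}

// Dense matrix of a Pauli string, e.g. "XZ" = X (x) Z
Mat pauli_string(const std::string &s) {
  Mat m = pauli(s[0]);
  for (size_t i = 1; i < s.size(); ++i) m = kron(m, pauli(s[i]));
  return m;
}

// |phi> = P |psi>, plain dense matrix-vector product
std::vector<cx> apply(const Mat &m, const std::vector<cx> &v) {
  std::vector<cx> out(m.size(), 0);
  for (size_t i = 0; i < m.size(); ++i)
    for (size_t j = 0; j < v.size(); ++j) out[i] += m[i][j] * v[j];
  return out;
}
```

The dense matrix costs $O(4^n)$ memory, which is exactly why the Pauli Composer (operation 1) targets a sparse representation instead: every Pauli string maps each basis state to a single basis state with a phase.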
6 changes: 4 additions & 2 deletions include/__factory.hpp
Original file line number Diff line number Diff line change
@@ -86,11 +86,13 @@ auto rand(std::vector<T> &blob, std::array<size_t, n_dim> extents) {
if constexpr (is_complex<T>::value) {
std::uniform_real_distribution<typename T::value_type> dis(0, 1.0);

std::generate(blob.begin(), blob.end(), [&]() {
return T{dis(gen), dis(gen)};
});
} else {
std::uniform_real_distribution<T> dis(0, 1.0);

std::generate(blob.begin(), blob.end(), [&]() { return T{dis(gen)}; });
}

return std::mdspan<T, std::dextents<size_t, n_dim>>(blob.data(), extents);
33 changes: 33 additions & 0 deletions src/fast_pauli.cpp
Original file line number Diff line number Diff line change
@@ -0,0 +1,33 @@
#include "fast_pauli.hpp"

#include <experimental/mdspan>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>

namespace py = pybind11;

void scale_tensor_3d(py::array_t<double> array, double scale) {
auto arr = array.mutable_unchecked<>();
std::mdspan tensor(arr.mutable_data(), arr.shape(0), arr.shape(1),
arr.shape(2));

#pragma omp parallel for collapse(3)
for (size_t i = 0; i < tensor.extent(0); i++) {
for (size_t j = 0; j < tensor.extent(1); j++) {
for (size_t k = 0; k < tensor.extent(2); k++) {
tensor(i, j, k) *= scale;
}
}
}
}

PYBIND11_MODULE(py_fast_pauli, m) {
m.doc() = "Example NumPy/C++ Interface Using std::mdspan"; // optional module docstring
m.def("scale_tensor_3d", &scale_tensor_3d, "Scale a 3D tensor by a scalar.",
py::arg().noconvert(), py::arg("scale"));

py::class_<fast_pauli::SummedPauliOp<double>>(m, "SummedPauliOp")
.def(py::init<>());
}
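The same kernel can be sketched without the experimental mdspan header by indexing a flat row-major buffer directly (a hypothetical standalone variant of `scale_tensor_3d`, not the binding above):

```cpp
#include <cstddef>
#include <vector>

// Scale an n0 x n1 x n2 tensor stored as a flat row-major buffer.
// The OpenMP pragma parallelizes all three loops when built with
// -fopenmp and is harmlessly ignored otherwise.
void scale_3d(std::vector<double> &buf, std::size_t n0, std::size_t n1,
              std::size_t n2, double scale) {
#pragma omp parallel for collapse(3)
  for (std::size_t i = 0; i < n0; ++i)
    for (std::size_t j = 0; j < n1; ++j)
      for (std::size_t k = 0; k < n2; ++k)
        buf[(i * n1 + j) * n2 + k] *= scale;  // same element mdspan(i, j, k)
}
```

The mdspan version in the diff does the same arithmetic; the layout mapping `(i * n1 + j) * n2 + k` is just written out by hand here.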