Address executor and observable incompatibility #2514

Status: Open. Wants to merge 22 commits into base: main (changes shown from 14 commits).

Commits (22):
a6e09f6
Remove check for observer return type
bdg221 Sep 24, 2024
1a42969
Add error for more than one measurement per qubit
bdg221 Sep 25, 2024
aa2597d
Remove commented line
bdg221 Sep 25, 2024
a5ec9d0
Add test for multiple measurements on a qubit
bdg221 Sep 25, 2024
a08a87f
Update executor and observable documentation
bdg221 Sep 25, 2024
edcf03d
Handle correct case without measurements
bdg221 Sep 26, 2024
4225339
Add back line that broke density matrices
bdg221 Sep 26, 2024
6a6fa13
Remove test for condition that was removed
bdg221 Sep 26, 2024
1d447cf
Remove no longer required test parameters
bdg221 Sep 26, 2024
163b045
executor_observable compatability with typehint and tests
bdg221 Sep 27, 2024
2def914
Add logic to check executor observable compat and tests
bdg221 Sep 27, 2024
7c76316
retry failed attempt with measurement and check returned type
bdg221 Oct 4, 2024
72c0168
add test to compare typed vs nontyped
bdg221 Oct 4, 2024
dc32389
Add numpy float64 to FloatLike
bdg221 Oct 4, 2024
1333ebf
remove uncessary assert
bdg221 Oct 29, 2024
99c7374
update f-string formatting
bdg221 Oct 29, 2024
bbd1298
Add back executor call in float test
bdg221 Oct 30, 2024
02bd3f5
Update finding and checking existing qubit with measurements
bdg221 Oct 30, 2024
3c7b027
Parse results using manual return type
bdg221 Oct 30, 2024
7c4a60f
Check type inside Sequence and Iterators
bdg221 Oct 30, 2024
a0e4cda
Add a second qubit to multi measurement test
bdg221 Oct 30, 2024
8a68dfe
Update Sequence check and add tests
bdg221 Nov 4, 2024
4 changes: 3 additions & 1 deletion docs/source/guide/executors.md
@@ -30,7 +30,9 @@ To instantiate an `Executor`, provide a function which either:
1. Inputs a `mitiq.QPROGRAM` and outputs a `mitiq.QuantumResult`.
2. Inputs a sequence of `mitiq.QPROGRAM`s and outputs a sequence of `mitiq.QuantumResult`s.

**The function must be [annotated](https://peps.python.org/pep-3107/) to tell Mitiq which type of `QuantumResult` it returns. Functions with no annotations are assumed to return `float`s.**
```{warning}
To avoid confusion and invalid results, the executor function must be [annotated](https://peps.python.org/pep-3107/) to tell Mitiq which type of `QuantumResult` it returns. Functions without annotations are assumed to return `float`s.
```

A `QPROGRAM` is "something which a quantum computer inputs" and a `QuantumResult` is "something which a quantum computer outputs." The latter is canonically a bitstring for real quantum hardware, but can be other objects for testing, e.g. a density matrix.
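For illustration, a minimal annotated executor could look like the following sketch (the `execute` name, the single-qubit assumption, and the hard-coded `Z` expectation are just for this example, not part of the documented API):

```python
import cirq
import numpy as np


def execute(circuit: cirq.Circuit) -> float:
    """Executor annotated to return a float (an expectation value).

    Assumes a single-qubit circuit and returns <Z> computed from a
    noiseless density-matrix simulation.
    """
    rho = cirq.DensityMatrixSimulator().simulate(circuit).final_density_matrix
    pauli_z = np.diag([1.0, -1.0]).astype(complex)
    return float(np.real(np.trace(rho @ pauli_z)))
```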

4 changes: 2 additions & 2 deletions docs/source/guide/observables.md
@@ -128,8 +128,8 @@ obs.expectation(circuit, execute=mitiq_cirq.sample_bitstrings)

In error mitigation techniques, you can provide an observable to specify the expectation value to mitigate.

```{admonition} Note:
When specifying an `Observable`, you must ensure that the return type of the executor function is `MeasurementResultLike` or `DensityMatrixLike`.
```{warning}
As noted in the [executor documentation](./executors.md#the-input-function), the executor must be annotated with the appropriate type hinting for the return type. Additionally, when specifying an `Observable`, you must ensure that the return type of the executor function is `MeasurementResultLike` or `DensityMatrixLike`.
```
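As a sketch of what that pairing looks like (reusing the `mitiq_cirq.sample_bitstrings` executor shown earlier on this page; the wrapper function name is illustrative):

```python
from mitiq import MeasurementResult, Observable, PauliString
from mitiq.interface import mitiq_cirq


def execute(circuit) -> MeasurementResult:
    # The MeasurementResult annotation tells Mitiq the executor returns
    # bitstrings, which is compatible with specifying an Observable.
    return mitiq_cirq.sample_bitstrings(circuit, noise_level=(0,))


obs = Observable(PauliString("Z"))
# obs.expectation(circuit, execute) can then combine the sampled
# bitstrings with the observable to produce an expectation value.
```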

```{code-cell} ipython3
70 changes: 56 additions & 14 deletions mitiq/executor/executor.py
@@ -40,6 +40,8 @@
FloatLike = [
None, # Untyped executors are assumed to return floats.
float,
np.float32,
np.float64,
Comment on lines +43 to +44

Member:
Were these needed when adding further tests?

bdg221 (Collaborator Author):

I found the need to add these when running `make test`. Here is the result without those in `FloatLike`: (screenshot of failing tests). And with those two lines, I don't see the errors: (screenshot of passing tests).

@natestemen do you see this behavior too?

bdg221 (Collaborator Author), Oct 30, 2024:

I remembered why these two are in FloatLike. Since we are trying to assume things if a return type is not specified, we check the type of what is returned from self.run(). If we do not include them, then the checks in the "Parse the results" section will fail, especially if we no longer look at self._executor_return_type as mentioned in #2514 (comment).
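For context, a quick standalone illustration (not from the PR) of why the exact-type membership check misses NumPy floats unless `np.float32`/`np.float64` are listed explicitly:

```python
import numpy as np

result = np.float64(1.0)

# type() reports numpy.float64, not the builtin float, so a membership
# check against a list containing only `float` would miss it.
print(type(result) is float)                # False
print(type(result) in [float])              # False
print(type(result) in [float, np.float64])  # True

# np.float64 does subclass the builtin float, so isinstance() would pass,
# but the exact-type comparison used for manual_return_type does not.
print(isinstance(result, float))            # True
```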

Iterable[float],
List[float],
Sequence[float],
@@ -149,6 +151,29 @@ def evaluate(
"Expected observable to be hermitian. Continue with caution."
)

# Check executor and observable compatibility with type hinting
# If FloatLike is specified as a return and observable is used
if self._executor_return_type in FloatLike and observable is not None:
if self._executor_return_type is not None:
raise ValueError(
"When using a float like result, measurements should be "
"included manually and an observable should not be "
"used."
)
elif observable is None:
# Type hinted as DensityMatrixLike but no observable is set
if self._executor_return_type in DensityMatrixLike:
raise ValueError(
"When using a density matrix like result, an observable "
"is required."
)
# Type hinted as MeasurementResultLike but no observable is set
elif self._executor_return_type in MeasurementResultLike:
raise ValueError(
"When using a measurement, or bitstring, like result, an "
"observable is required."
)
Comment on lines +163 to +175

Member:
These checks make it impossible to use an Executor to run and return density matrices and counts. I don't think we want to limit users in that way, do we?

bdg221 (Collaborator Author):

It does make it impossible to use an Executor to run and return density matrices and counts if an observable is not provided. This limitation actually comes from parsing the results, where the observable is used by `_expectation_from_density_matrix()` for density matrices and `_expectation_from_measurements()` for counts.
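For readers following along, a rough sketch of the computation the observable performs on a density matrix (illustrative NumPy only, not Mitiq's actual `_expectation_from_density_matrix` implementation):

```python
import numpy as np

# The expectation value extracted from a density matrix is Tr(rho @ O),
# which is why an observable is required when the executor returns
# DensityMatrixLike results.
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # the state |0><0|
pauli_z = np.diag([1.0, -1.0]).astype(complex)

expectation = float(np.real(np.trace(rho @ pauli_z)))
print(expectation)  # 1.0
```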


# Get all required circuits to run.
if (
observable is not None
@@ -160,38 +185,55 @@
for circuit_with_measurements in observable.measure_in(circuit)
]
result_step = observable.ngroups
elif (
observable is not None
and self._executor_return_type not in MeasurementResultLike
and self._executor_return_type not in DensityMatrixLike
):
raise ValueError(
"""Executor and observable are not compatible. Executors
returning expectation values as float must be used with
observable=None"""
)
else:
all_circuits = circuits
result_step = 1

# Run all required circuits.
all_results = self.run(all_circuits, force_run_all, **kwargs)
try:
all_results = self.run(all_circuits, force_run_all, **kwargs)
except Exception:
Member:
Catch specific exceptions when possible. What fails in the above?

bdg221 (Collaborator Author), Oct 29, 2024:

This exception is coming from the backend, which makes it tough to be more specific. This is being used to capture the scenario where the executor actually returns a MeasurementResultLike result but no return type was specified in the annotation. In that situation, an observable should be passed in, but we didn't go through the `if` at line 178 to add the measurements to the circuit(s). Currently, the different backends would throw errors like:

Qiskit: QiskitError: 'No counts for experiment "0"'
Cirq: ValueError: Circuit has no measurements to sample.

The cost of re-running an experiment may not be worth it for this scenario. Of all the times the executors fail, how frequently do we think this is the scenario, and how costly is it to re-run in those other scenarios?
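For reference, the Cirq failure mode mentioned above can be reproduced directly with a plain simulator (sketch, not part of the PR):

```python
import cirq

q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X(q))  # no measurement gate

simulator = cirq.Simulator()
try:
    simulator.run(circuit, repetitions=10)
except ValueError as err:
    print(err)  # Circuit has no measurements to sample.

# Adding measurements first (as observable.measure_in does for the
# executor) avoids the error.
simulator.run(circuit + cirq.measure(q), repetitions=10)
```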

if observable is not None and self._executor_return_type is None:
all_circuits = [
circuit_with_measurements
for circuit in circuits
for circuit_with_measurements in observable.measure_in(
circuit
)
]
all_results = self.run(all_circuits, force_run_all, **kwargs)
else:
raise
Member:
What happens here?

bdg221 (Collaborator Author):

This is checking if the observable is not None and trying to see if the result type was meant to be MeasurementResultLike. If the return type had been specified, the measurements would already have been added to the circuit(s).

Comment on lines 192 to +206

Contributor:

I like the idea of just executing one of the circuits and checking the return type. Does this mean we can move all the compatibility checks in lines 156-175 after this manual checking of the return type? This would remove the need for the `_executor_return_type` variable, and no type hinting would be needed anymore. Maybe I am missing something.

bdg221 (Collaborator Author):

A couple of things.

  1. This is NOT checking the return type of a single circuit, but instead this is running all circuits with the executor.

  2. The checks in lines 156-175 are specifically checking the following cases based on limitations already in the execute function.

  • FloatLike - the circuits are expected to already include measurements, and there should NOT be an observable. The executor does not handle observables for FloatLike results.
  • DensityMatrixLike and MeasurementResultLike - MeasurementResultLike handles observables in lines 182-187, and the two respectively use `observable._expectation_from_density_matrix()` and `observable._expectation_from_measurements()`.

Since these are requirements, I would think it would be better to check and break out in bad scenarios, instead of trying to run the circuits and then error out.

Contributor:

Ah yes, you are right that it is running all circuits! I guess I was looking at lines 209-216, where you use the manual return type (of the first circuit?) to parse results. I was thinking to just run the first circuit to get the manual return type and do all the checks on observable compatibility afterwards. This is a completely different approach than using _executor_return_type, so it's understandable if we don't want to do that.

bdg221 (Collaborator Author):

The main use of _executor_return_type (outside of what I have done in this PR) is to check if it is MeasurementResultLike and there is an observable so that observable.measure_in() can be called to add measurements into the circuit. Without doing this, the backends complain. I can't think of a way to do this without the _executor_return_type since calling run() to try to get back a manual return type throws errors.
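As a side note, the return-type hint itself can be read without executing anything, which is roughly what `_executor_return_type` relies on (a generic sketch using the standard library, not Mitiq's exact code; the executor names are hypothetical):

```python
from typing import get_type_hints

from mitiq import MeasurementResult


def executor_typed(circuit) -> MeasurementResult:
    ...


def executor_untyped(circuit):
    ...


# get_type_hints() exposes the declared return type, or nothing if the
# executor is unannotated, so the decision to call observable.measure_in()
# can be made before any circuits are run.
print(get_type_hints(executor_typed).get("return"))    # MeasurementResult
print(get_type_hints(executor_untyped).get("return"))  # None
```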


# check returned type
manual_return_type = None
if len(all_results) > 0:
manual_return_type = type(all_results[0])
Member:

Should we even care about the _executor_return_type at this point?

bdg221 (Collaborator Author):

If an executor returns something different from what is specified, other errors will be thrown. You are right, at this point we can stick with the manual_return_type.

bdg221 (Collaborator Author), Oct 30, 2024:

Fixed in 3c7b027

bdg221 (Collaborator Author):

Just another note on this. The `type()` call will only return the high-level object type, like `list`, without the element types. `FloatLike`, `MeasurementResultLike`, and `DensityMatrixLike` expect entries like `List[float]`, for example, since these previously came from `_executor_return_type`.

I handle this with 7c4a60f by checking whether the result falls under a `Sequence` or `Iterable` and, if so, saving the type of the first element as the `manual_return_type`.

bdg221 (Collaborator Author), Oct 30, 2024:

Should I add more unit testing since I just introduced this if branching?

EDIT: Too late. I added tests and updated the check to see whether the Sequence is empty in 8a68dfe.
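A rough sketch of that element-type check (illustrative only; the hypothetical `infer_result_type` helper is not the PR's exact implementation):

```python
from collections.abc import Iterable, Sequence

import numpy as np


def infer_result_type(results):
    """Return the element type for non-empty sequences, else the container type.

    type() alone only reports e.g. `list`, so peek at the first element to
    distinguish List[float]-style results from a bare container.
    """
    if (
        isinstance(results, (Sequence, Iterable))
        and not isinstance(results, (str, bytes, np.ndarray))
    ):
        results = list(results)
        if results:
            return type(results[0])
    return type(results)


print(infer_result_type([1.0, 2.0]))  # <class 'float'>
print(infer_result_type([]))          # <class 'list'>
```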


# Parse the results.
if self._executor_return_type in FloatLike:
if (
self._executor_return_type in FloatLike
and self._executor_return_type is not None
) or manual_return_type in FloatLike:
results = np.real_if_close(
cast(Sequence[float], all_results)
).tolist()

elif self._executor_return_type in DensityMatrixLike:
elif (
self._executor_return_type in DensityMatrixLike
or manual_return_type in DensityMatrixLike
):
observable = cast(Observable, observable)
all_results = cast(List[npt.NDArray[np.complex64]], all_results)
results = [
observable._expectation_from_density_matrix(density_matrix)
for density_matrix in all_results
]

elif self._executor_return_type in MeasurementResultLike:
elif (
self._executor_return_type in MeasurementResultLike
or manual_return_type in MeasurementResultLike
):
observable = cast(Observable, observable)
all_results = cast(List[MeasurementResult], all_results)
results = [
141 changes: 94 additions & 47 deletions mitiq/executor/tests/test_executor.py
@@ -12,6 +12,7 @@
import numpy as np
import pyquil
import pytest
from qiskit import QuantumCircuit

from mitiq import MeasurementResult
from mitiq.executor.executor import Executor
Expand All @@ -37,7 +38,7 @@ def executor_batched_unique(circuits) -> List[float]:
return [executor_serial_unique(circuit) for circuit in circuits]


def executor_serial_unique(circuit):
def executor_serial_unique(circuit) -> float:
return float(len(circuit))


@@ -58,21 +59,29 @@ def executor_pyquil_batched(programs) -> List[float]:


# Serial / batched executors which return measurements.
def executor_measurements(circuit) -> MeasurementResult:
def executor_measurements(circuit):
return sample_bitstrings(circuit, noise_level=(0,))


def executor_measurements_typed(circuit) -> MeasurementResult:
return sample_bitstrings(circuit, noise_level=(0,))


def executor_measurements_batched(circuits) -> List[MeasurementResult]:
return [executor_measurements(circuit) for circuit in circuits]
return [executor_measurements_typed(circuit) for circuit in circuits]


# Serial / batched executors which return density matrices.
def executor_density_matrix(circuit) -> np.ndarray:
def executor_density_matrix(circuit):
return compute_density_matrix(circuit, noise_level=(0,))


def executor_density_matrix_typed(circuit) -> np.ndarray:
return compute_density_matrix(circuit, noise_level=(0,))


def executor_density_matrix_batched(circuits) -> List[np.ndarray]:
return [executor_density_matrix(circuit) for circuit in circuits]
return [executor_density_matrix_typed(circuit) for circuit in circuits]


def test_executor_simple():
@@ -86,7 +95,7 @@ def test_executor_is_batched_executor():
assert Executor.is_batched_executor(executor_batched)
assert not Executor.is_batched_executor(executor_serial_typed)
assert not Executor.is_batched_executor(executor_serial)
assert not Executor.is_batched_executor(executor_measurements)
assert not Executor.is_batched_executor(executor_measurements_typed)
assert Executor.is_batched_executor(executor_measurements_batched)


@@ -96,7 +105,7 @@ def test_executor_non_hermitian_observable():
q = cirq.LineQubit(0)
circuits = [cirq.Circuit(cirq.I.on(q)), cirq.Circuit(cirq.X.on(q))]

executor = Executor(executor_measurements)
executor = Executor(executor_measurements_typed)

with pytest.warns(UserWarning, match="hermitian"):
executor.evaluate(circuits, obs)
@@ -199,53 +208,27 @@
)
def test_executor_evaluate_float(execute):
q = cirq.LineQubit(0)
circuits = [cirq.Circuit(cirq.X(q)), cirq.Circuit(cirq.H(q), cirq.Z(q))]
circuits = [
cirq.Circuit(cirq.X(q), cirq.M(q)),
cirq.Circuit(cirq.H(q), cirq.Z(q), cirq.M(q)),
]

executor = Executor(execute)

results = executor.evaluate(circuits)
assert np.allclose(results, [1, 2])
assert np.allclose(results, [2, 3])
bdg221 marked this conversation as resolved.

if execute is executor_serial_unique:
assert executor.calls_to_executor == 2
else:
assert executor.calls_to_executor == 1

assert executor.executed_circuits == circuits
assert executor.quantum_results == [1, 2]


@pytest.mark.parametrize(
"execute",
[
executor_batched,
executor_batched_unique,
executor_serial_unique,
executor_serial_typed,
executor_serial,
executor_pyquil_batched,
],
)
@pytest.mark.parametrize(
"obs",
[
PauliString("X"),
PauliString("XZ"),
PauliString("Z"),
],
)
def test_executor_observable_compatibility_check(execute, obs):
q = cirq.LineQubit(0)
circuits = [cirq.Circuit(cirq.X(q)), cirq.Circuit(cirq.H(q), cirq.Z(q))]

executor = Executor(execute)

with pytest.raises(ValueError, match="are not compatible"):
executor.evaluate(circuits, obs)
Comment on lines -237 to -244

Member:
Does this test need to be removed? I think I'm missing something because I don't quite see why these are not compatible.

bdg221 (Collaborator Author), Oct 30, 2024:

All of those executors return floats or `list[float]`s. We do not allow you to use an observable with an executor that returns a `FloatLike` result. However, this was an old test from years ago. This is now covered in `test_executor_float_with_observable_typed()`.

assert executor.quantum_results == [2, 3]


@pytest.mark.parametrize(
"execute", [executor_measurements, executor_measurements_batched]
"execute", [executor_measurements_typed, executor_measurements_batched]
)
def test_executor_evaluate_measurements(execute):
obs = Observable(PauliString("Z"))
@@ -258,24 +241,24 @@ def test_executor_evaluate_measurements(execute):
results = executor.evaluate(circuits, obs)
assert np.allclose(results, [1, -1])

if execute is executor_measurements:
if execute is executor_measurements_typed:
assert executor.calls_to_executor == 2
else:
assert executor.calls_to_executor == 1

assert executor.executed_circuits[0] == circuits[0] + cirq.measure(q)
assert executor.executed_circuits[1] == circuits[1] + cirq.measure(q)
assert executor.quantum_results[0] == executor_measurements(
assert executor.quantum_results[0] == executor_measurements_typed(
circuits[0] + cirq.measure(q)
)
assert executor.quantum_results[1] == executor_measurements(
assert executor.quantum_results[1] == executor_measurements_typed(
circuits[1] + cirq.measure(q)
)
assert len(executor.quantum_results) == len(circuits)


@pytest.mark.parametrize(
"execute", [executor_density_matrix, executor_density_matrix_batched]
"execute", [executor_density_matrix_typed, executor_density_matrix_batched]
)
def test_executor_evaluate_density_matrix(execute):
obs = Observable(PauliString("Z"))
@@ -288,16 +271,80 @@ def test_executor_evaluate_density_matrix(execute):
results = executor.evaluate(circuits, obs)
assert np.allclose(results, [1, -1])

if execute is executor_density_matrix:
if execute is executor_density_matrix_typed:
assert executor.calls_to_executor == 2
else:
assert executor.calls_to_executor == 1

assert executor.executed_circuits == circuits
assert np.allclose(
executor.quantum_results[0], executor_density_matrix(circuits[0])
executor.quantum_results[0], executor_density_matrix_typed(circuits[0])
)
assert np.allclose(
executor.quantum_results[1], executor_density_matrix(circuits[1])
executor.quantum_results[1], executor_density_matrix_typed(circuits[1])
)
assert len(executor.quantum_results) == len(circuits)


def test_executor_float_with_observable_typed():
obs = Observable(PauliString("Z"))
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X.on(q))
executor = Executor(executor_serial_typed)
with pytest.raises(
ValueError,
match="When using a float like result",
):
executor.evaluate(circuit, obs)


def test_executor_measurements_without_observable_typed():
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X.on(q))
executor = Executor(executor_measurements_typed)
with pytest.raises(
ValueError,
match="When using a measurement, or bitstring, like result",
):
executor.evaluate(circuit)


def test_executor_density_matrix_without_observable_typed():
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X.on(q))
executor = Executor(executor_density_matrix_typed)
with pytest.raises(
ValueError,
match="When using a density matrix like result",
):
executor.evaluate(circuit)


def test_executor_float_not_typed():
executor = Executor(executor_serial)
executor_typed = Executor(executor_serial_typed)
qcirc = QuantumCircuit(1)
qcirc.h(0)
assert executor.evaluate(qcirc) == executor_typed.evaluate(qcirc)


def test_executor_density_matrix_not_typed():
obs = Observable(PauliString("Z"))
executor = Executor(executor_density_matrix)
executor_typed = Executor(executor_density_matrix_typed)
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X.on(q))
assert np.allclose(
executor.evaluate(circuit, obs), executor_typed.evaluate(circuit, obs)
)


def test_executor_measurements_not_typed():
obs = Observable(PauliString("Z"))
executor = Executor(executor_measurements)
executor_typed = Executor(executor_measurements_typed)
q = cirq.LineQubit(0)
circuit = cirq.Circuit(cirq.X.on(q))
assert executor.evaluate(circuit, obs) == executor_typed.evaluate(
circuit, obs
)