Address executor and observable incompatibility #2514
base: main
```diff
@@ -40,6 +40,8 @@
 FloatLike = [
     None,  # Untyped executors are assumed to return floats.
     float,
+    np.float32,
+    np.float64,
     Iterable[float],
     List[float],
     Sequence[float],
```
```diff
@@ -149,6 +151,29 @@ def evaluate(
             "Expected observable to be hermitian. Continue with caution."
         )

+        # Check executor and observable compatibility with type hinting.
+        # If FloatLike is specified as a return type and an observable is used:
+        if self._executor_return_type in FloatLike and observable is not None:
+            if self._executor_return_type is not None:
+                raise ValueError(
+                    "When using a float like result, measurements should be "
+                    "included manually and an observable should not be "
+                    "used."
+                )
+        elif observable is None:
+            # Type hinted as DensityMatrixLike but no observable is set.
+            if self._executor_return_type in DensityMatrixLike:
+                raise ValueError(
+                    "When using a density matrix like result, an observable "
+                    "is required."
+                )
+            # Type hinted as MeasurementResultLike but no observable is set.
+            elif self._executor_return_type in MeasurementResultLike:
+                raise ValueError(
+                    "When using a measurement, or bitstring, like result, an "
+                    "observable is required."
+                )
+
         # Get all required circuits to run.
         if (
             observable is not None
```

Comment on lines +163 to +175:

Comment: These checks make it impossible to use an …

Reply: It does make it impossible to use an …
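For illustration, a minimal sketch of how a return-type annotation classifies an executor (hypothetical executor names and a local FloatLike stand-in; exactly how mitiq reads the hint is not shown in this diff):

```python
from typing import List, Sequence, get_type_hints

import numpy as np

# Local stand-in for the FloatLike list above (illustration only).
FloatLike = [None, float, np.float32, np.float64, List[float], Sequence[float]]


def typed_executor(circuit) -> float:
    # Annotated: classified as FloatLike, so passing an observable raises.
    return 1.0


def untyped_executor(circuit):
    # Unannotated: the return hint is None, which FloatLike also contains.
    return 1.0


assert get_type_hints(typed_executor).get("return") in FloatLike
assert get_type_hints(untyped_executor).get("return") in FloatLike
```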
```diff
@@ -160,38 +185,55 @@ def evaluate(
                 for circuit_with_measurements in observable.measure_in(circuit)
             ]
             result_step = observable.ngroups
-        elif (
-            observable is not None
-            and self._executor_return_type not in MeasurementResultLike
-            and self._executor_return_type not in DensityMatrixLike
-        ):
-            raise ValueError(
-                """Executor and observable are not compatible. Executors
-                returning expectation values as float must be used with
-                observable=None"""
-            )
         else:
             all_circuits = circuits
             result_step = 1

         # Run all required circuits.
-        all_results = self.run(all_circuits, force_run_all, **kwargs)
+        try:
+            all_results = self.run(all_circuits, force_run_all, **kwargs)
+        except Exception:
```
Comment: Catch specific exceptions when possible. What fails in the above?

Reply: This exception is coming from the backend, which makes it tough to be more specific. It is being used to capture the scenario where a MeasurementResultLike return type is expected but not specified. In that situation an observable should be passed in, but we didn't go through the … Qiskit: QiskitError: 'No counts for experiment "0"'

Comment: The cost of re-running an experiment may not be worth it for this scenario. Of all the times the executors fail, how frequently do we think this is the scenario, and how costly is it to re-run in those other scenarios?
```diff
+            if observable is not None and self._executor_return_type is None:
+                all_circuits = [
+                    circuit_with_measurements
+                    for circuit in circuits
+                    for circuit_with_measurements in observable.measure_in(
+                        circuit
+                    )
+                ]
+                all_results = self.run(all_circuits, force_run_all, **kwargs)
+            else:
+                raise
```
Comment: What happens here?

Reply: This checks whether the observable is not None and tries to see if the result type was meant to be MeasurementResultLike. If the return type had been specified, the measurements would already have been added to the circuit(s).
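The retry amounts to the following sketch (hypothetical helper name and simplified signature; observable.measure_in is mitiq's API, the rest is illustrative):

```python
def run_with_measurement_fallback(run, circuits, observable, **kwargs):
    # `run` stands in for self.run (hypothetical free-function version).
    try:
        return run(circuits, **kwargs)
    except Exception:
        # Backend errors are opaque (e.g. Qiskit's "No counts for
        # experiment"), so retry only when the failure may stem from
        # missing measurements: an observable was given but the executor
        # carries no return-type hint.
        if observable is not None:
            measured = [
                measured_circuit
                for circuit in circuits
                for measured_circuit in observable.measure_in(circuit)
            ]
            return run(measured, **kwargs)
        raise
```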
Comment on lines 192 to +206:

Comment: I like the idea of just executing one of the circuits and checking the return type. Does this mean we can move all the compatibility checks in lines 156-175 after this manual checking of the return type? This would remove the need for the _executor_return_type variable, and no type hinting would be needed anymore. Maybe I am missing something.

Reply: A couple of things. … Since these are requirements, I would think it would be better to check and break out in bad scenarios, instead of trying to run the circuits and then erroring out.

Comment: Ah yes, you are right that it is running all circuits! I guess I was looking at lines 209-216, where you use the manual return type (of the first circuit?) to parse results. I was thinking to just run the first circuit to get the manual return type and do all the checks on observable compatibility afterwards. This is a completely different approach than using _executor_return_type, so it is understandable if we don't want to do that.

Reply: The main use of …
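The probe-first alternative floated in this thread might look roughly like the sketch below (hypothetical helper; not what the PR implements):

```python
def probe_return_type(run_one, circuits, observable):
    # Run only the first circuit and classify the executor by what it
    # actually returns, instead of by a type hint.
    first_result = run_one(circuits[0])
    # Compatibility checks could then run here, before the remaining
    # circuits are submitted.
    if isinstance(first_result, float) and observable is not None:
        raise ValueError(
            "Float-returning executors should not be given an observable."
        )
    return type(first_result)
```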
```diff
+
+        # Check returned type.
+        manual_return_type = None
+        if len(all_results) > 0:
+            manual_return_type = type(all_results[0])
```

Comment: Should we even care about the …

Reply: If an executor returns something different than what is specified, there will be other errors thrown. You are right, at this point we can stick with the …

Reply: Fixed in 3c7b027.

Comment: Just another note on this. The … I handle this with 7c4a60f by checking whether the result falls under a Sequence or Iterable and, if so, saving the first element type to the …

Reply: Should I add more unit testing since I just introduced this? EDIT: Too late. I added a test and updated the check to see if the Sequence is empty in 8a68dfe.
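A sketch of that inference (hypothetical helper name; the empty-sequence guard follows 8a68dfe):

```python
from collections.abc import Sequence


def infer_manual_return_type(all_results):
    # Batched executors return e.g. List[float], so classify by the first
    # element's type, guarding against an empty result or empty sequence.
    if len(all_results) == 0:
        return None
    first = all_results[0]
    if isinstance(first, Sequence) and len(first) > 0:
        return type(first[0])
    return type(first)
```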
```diff
         # Parse the results.
-        if self._executor_return_type in FloatLike:
+        if (
+            self._executor_return_type in FloatLike
+            and self._executor_return_type is not None
+        ) or manual_return_type in FloatLike:
             results = np.real_if_close(
                 cast(Sequence[float], all_results)
             ).tolist()

-        elif self._executor_return_type in DensityMatrixLike:
+        elif (
+            self._executor_return_type in DensityMatrixLike
+            or manual_return_type in DensityMatrixLike
+        ):
             observable = cast(Observable, observable)
             all_results = cast(List[npt.NDArray[np.complex64]], all_results)
             results = [
                 observable._expectation_from_density_matrix(density_matrix)
                 for density_matrix in all_results
             ]

-        elif self._executor_return_type in MeasurementResultLike:
+        elif (
+            self._executor_return_type in MeasurementResultLike
+            or manual_return_type in MeasurementResultLike
+        ):
             observable = cast(Observable, observable)
             all_results = cast(List[MeasurementResult], all_results)
             results = [
```
The second file in the diff updates the executor tests:
```diff
@@ -12,6 +12,7 @@
 import numpy as np
 import pyquil
 import pytest
+from qiskit import QuantumCircuit

 from mitiq import MeasurementResult
 from mitiq.executor.executor import Executor
```
```diff
@@ -37,7 +38,7 @@ def executor_batched_unique(circuits) -> List[float]:
     return [executor_serial_unique(circuit) for circuit in circuits]


-def executor_serial_unique(circuit):
+def executor_serial_unique(circuit) -> float:
     return float(len(circuit))
```
```diff
@@ -58,21 +59,29 @@ def executor_pyquil_batched(programs) -> List[float]:


 # Serial / batched executors which return measurements.
-def executor_measurements(circuit) -> MeasurementResult:
+def executor_measurements(circuit):
     return sample_bitstrings(circuit, noise_level=(0,))


+def executor_measurements_typed(circuit) -> MeasurementResult:
+    return sample_bitstrings(circuit, noise_level=(0,))
+
+
 def executor_measurements_batched(circuits) -> List[MeasurementResult]:
-    return [executor_measurements(circuit) for circuit in circuits]
+    return [executor_measurements_typed(circuit) for circuit in circuits]


 # Serial / batched executors which return density matrices.
-def executor_density_matrix(circuit) -> np.ndarray:
+def executor_density_matrix(circuit):
     return compute_density_matrix(circuit, noise_level=(0,))


+def executor_density_matrix_typed(circuit) -> np.ndarray:
+    return compute_density_matrix(circuit, noise_level=(0,))
+
+
 def executor_density_matrix_batched(circuits) -> List[np.ndarray]:
-    return [executor_density_matrix(circuit) for circuit in circuits]
+    return [executor_density_matrix_typed(circuit) for circuit in circuits]


 def test_executor_simple():
```
```diff
@@ -86,7 +95,7 @@ def test_executor_is_batched_executor():
     assert Executor.is_batched_executor(executor_batched)
     assert not Executor.is_batched_executor(executor_serial_typed)
     assert not Executor.is_batched_executor(executor_serial)
-    assert not Executor.is_batched_executor(executor_measurements)
+    assert not Executor.is_batched_executor(executor_measurements_typed)
     assert Executor.is_batched_executor(executor_measurements_batched)
```
```diff
@@ -96,7 +105,7 @@ def test_executor_non_hermitian_observable():
     q = cirq.LineQubit(0)
     circuits = [cirq.Circuit(cirq.I.on(q)), cirq.Circuit(cirq.X.on(q))]

-    executor = Executor(executor_measurements)
+    executor = Executor(executor_measurements_typed)

     with pytest.warns(UserWarning, match="hermitian"):
         executor.evaluate(circuits, obs)
```
```diff
@@ -199,53 +208,27 @@ def test_run_executor_preserves_order(s, b):
 )
 def test_executor_evaluate_float(execute):
     q = cirq.LineQubit(0)
-    circuits = [cirq.Circuit(cirq.X(q)), cirq.Circuit(cirq.H(q), cirq.Z(q))]
+    circuits = [
+        cirq.Circuit(cirq.X(q), cirq.M(q)),
+        cirq.Circuit(cirq.H(q), cirq.Z(q), cirq.M(q)),
+    ]

     executor = Executor(execute)

     results = executor.evaluate(circuits)
-    assert np.allclose(results, [1, 2])
+    assert np.allclose(results, [2, 3])
```

bdg221 marked this conversation as resolved.
```diff
     if execute is executor_serial_unique:
         assert executor.calls_to_executor == 2
     else:
         assert executor.calls_to_executor == 1

     assert executor.executed_circuits == circuits
-    assert executor.quantum_results == [1, 2]
-
-
-@pytest.mark.parametrize(
-    "execute",
-    [
-        executor_batched,
-        executor_batched_unique,
-        executor_serial_unique,
-        executor_serial_typed,
-        executor_serial,
-        executor_pyquil_batched,
-    ],
-)
-@pytest.mark.parametrize(
-    "obs",
-    [
-        PauliString("X"),
-        PauliString("XZ"),
-        PauliString("Z"),
-    ],
-)
-def test_executor_observable_compatibility_check(execute, obs):
-    q = cirq.LineQubit(0)
-    circuits = [cirq.Circuit(cirq.X(q)), cirq.Circuit(cirq.H(q), cirq.Z(q))]
-
-    executor = Executor(execute)
-
-    with pytest.raises(ValueError, match="are not compatible"):
-        executor.evaluate(circuits, obs)
```
Comment on lines -237 to -244:

Comment: Does this test need to be removed? I think I'm missing something, because I don't quite see why these are not compatible.

Reply: All of those executors return float or List[float]. We do not allow an observable with an executor that returns FloatLike. However, this was an old test from years ago. This is now covered in …
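With the new checks, the incompatibility that the old test targeted surfaces as the more specific error, as the typed tests below exercise. A minimal usage sketch, assuming mitiq's public Executor, Observable, and PauliString:

```python
import cirq

from mitiq import Executor, Observable, PauliString


def float_executor(circuit) -> float:
    # Pretend this returns an expectation value, with measurements already
    # included in the circuit.
    return 1.0


executor = Executor(float_executor)
circuit = cirq.Circuit(cirq.X(cirq.LineQubit(0)))
try:
    executor.evaluate(circuit, Observable(PauliString("Z")))
except ValueError as err:
    print(err)  # "When using a float like result, measurements should be ..."
```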
```diff
+    assert executor.quantum_results == [2, 3]


 @pytest.mark.parametrize(
-    "execute", [executor_measurements, executor_measurements_batched]
+    "execute", [executor_measurements_typed, executor_measurements_batched]
 )
 def test_executor_evaluate_measurements(execute):
     obs = Observable(PauliString("Z"))
```
```diff
@@ -258,24 +241,24 @@ def test_executor_evaluate_measurements(execute):
     results = executor.evaluate(circuits, obs)
     assert np.allclose(results, [1, -1])

-    if execute is executor_measurements:
+    if execute is executor_measurements_typed:
         assert executor.calls_to_executor == 2
     else:
         assert executor.calls_to_executor == 1

     assert executor.executed_circuits[0] == circuits[0] + cirq.measure(q)
     assert executor.executed_circuits[1] == circuits[1] + cirq.measure(q)
-    assert executor.quantum_results[0] == executor_measurements(
+    assert executor.quantum_results[0] == executor_measurements_typed(
         circuits[0] + cirq.measure(q)
     )
-    assert executor.quantum_results[1] == executor_measurements(
+    assert executor.quantum_results[1] == executor_measurements_typed(
         circuits[1] + cirq.measure(q)
     )
     assert len(executor.quantum_results) == len(circuits)


 @pytest.mark.parametrize(
-    "execute", [executor_density_matrix, executor_density_matrix_batched]
+    "execute", [executor_density_matrix_typed, executor_density_matrix_batched]
 )
 def test_executor_evaluate_density_matrix(execute):
     obs = Observable(PauliString("Z"))
```
```diff
@@ -288,16 +271,80 @@ def test_executor_evaluate_density_matrix(execute):
     results = executor.evaluate(circuits, obs)
     assert np.allclose(results, [1, -1])

-    if execute is executor_density_matrix:
+    if execute is executor_density_matrix_typed:
         assert executor.calls_to_executor == 2
     else:
         assert executor.calls_to_executor == 1

     assert executor.executed_circuits == circuits
     assert np.allclose(
-        executor.quantum_results[0], executor_density_matrix(circuits[0])
+        executor.quantum_results[0], executor_density_matrix_typed(circuits[0])
     )
     assert np.allclose(
-        executor.quantum_results[1], executor_density_matrix(circuits[1])
+        executor.quantum_results[1], executor_density_matrix_typed(circuits[1])
     )
     assert len(executor.quantum_results) == len(circuits)
+
+
+def test_executor_float_with_observable_typed():
+    obs = Observable(PauliString("Z"))
+    q = cirq.LineQubit(0)
+    circuit = cirq.Circuit(cirq.X.on(q))
+    executor = Executor(executor_serial_typed)
+    with pytest.raises(
+        ValueError,
+        match="When using a float like result",
+    ):
+        executor.evaluate(circuit, obs)
+
+
+def test_executor_measurements_without_observable_typed():
+    q = cirq.LineQubit(0)
+    circuit = cirq.Circuit(cirq.X.on(q))
+    executor = Executor(executor_measurements_typed)
+    with pytest.raises(
+        ValueError,
+        match="When using a measurement, or bitstring, like result",
+    ):
+        executor.evaluate(circuit)
+
+
+def test_executor_density_matrix_without_observable_typed():
+    q = cirq.LineQubit(0)
+    circuit = cirq.Circuit(cirq.X.on(q))
+    executor = Executor(executor_density_matrix_typed)
+    with pytest.raises(
+        ValueError,
+        match="When using a density matrix like result",
+    ):
+        executor.evaluate(circuit)
+
+
+def test_executor_float_not_typed():
+    executor = Executor(executor_serial)
+    executor_typed = Executor(executor_serial_typed)
+    qcirc = QuantumCircuit(1)
+    qcirc.h(0)
+    assert executor.evaluate(qcirc) == executor_typed.evaluate(qcirc)
+
+
+def test_executor_density_matrix_not_typed():
+    obs = Observable(PauliString("Z"))
+    executor = Executor(executor_density_matrix)
+    executor_typed = Executor(executor_density_matrix_typed)
+    q = cirq.LineQubit(0)
+    circuit = cirq.Circuit(cirq.X.on(q))
+    assert np.allclose(
+        executor.evaluate(circuit, obs), executor_typed.evaluate(circuit, obs)
+    )
+
+
+def test_executor_measurements_not_typed():
+    obs = Observable(PauliString("Z"))
+    executor = Executor(executor_measurements)
+    executor_typed = Executor(executor_measurements_typed)
+    q = cirq.LineQubit(0)
+    circuit = cirq.Circuit(cirq.X.on(q))
+    assert executor.evaluate(circuit, obs) == executor_typed.evaluate(
+        circuit, obs
+    )
```
Comment: Were these needed when adding further tests?

Reply: I found the need to add these when running make test. Here is the result without those in FloatLike: … And with those 2 lines, I don't see the errors: … @natestemen do you see this behavior too?

Reply: I remembered why these two are in FloatLike. Since we are trying to assume things when a return type is not specified, we check the type of what is returned from self.run(). If we do not include them, then the ifs during "Parse the results." will fail, especially if we no longer look at self._executor_return_type, as mentioned in #2514 (comment).
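The underlying Python point: NumPy scalars are distinct types, so a membership test on type(result) misses them unless they are listed explicitly. A small self-contained check:

```python
import numpy as np

result = np.float64(1.0)  # what a NumPy-based executor typically returns

assert type(result) is np.float64
assert type(result) is not float  # even though np.float64 subclasses float
assert type(result) not in [float]
assert type(result) in [float, np.float32, np.float64]  # needs the NumPy types
```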