
WIP: TVD objective in GST #506

Draft

wants to merge 2 commits into develop
Conversation

rileyjmurray (Contributor) commented Nov 20, 2024

I noticed there's a total variation distance (TVD) objective function class that is almost finished. The only functions missing for the code to run without formally raising an error were RawTVDFunction.dterms and RawTVDFunction.chi2k_distributed_qty. The former has a very simple "canonical" implementation, but it's numerically problematic since the 1-norm isn't differentiable at any vector with a zero component. I don't know what a mathematically correct version of the second function would look like (or whether such an implementation exists), so I just have it return -1 to indicate "this doesn't mean what you think it means."
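
To illustrate the differentiability issue with a standalone sketch (plain NumPy, not the pyGSTi implementation): the per-outcome TVD terms are proportional to |f - p| for frequencies f and model probabilities p, so the "canonical" derivative is a sign function that jumps wherever f = p. One common workaround, shown here only for comparison, is to smooth |x| as sqrt(x**2 + eps).

import numpy as np

def tvd_terms(p, f):
    # Per-outcome contributions to the TVD: 0.5 * |f - p|.
    return 0.5 * np.abs(f - p)

def tvd_dterms_canonical(p, f):
    # "Canonical" derivative w.r.t. p: -0.5 * sign(f - p).
    # Undefined (and discontinuous) wherever f == p.
    return -0.5 * np.sign(f - p)

def tvd_dterms_smoothed(p, f, eps=1e-12):
    # Smoothed alternative: replace |x| with sqrt(x**2 + eps), whose
    # derivative -0.5 * x / sqrt(x**2 + eps) is continuous at x == 0.
    x = f - p
    return -0.5 * x / np.sqrt(x**2 + eps)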

I'm mostly opening this PR to facilitate QPL-internal discussion on the topic.

You can run GST where the final iteration uses the TVD objective instead of the log-likelihood with something like the following.

import pygsti

# `dataset`, `starting_model`, and `lsgst_circuit_list_of_lists` are assumed
# to be defined already (one hypothetical way to build them is sketched below).
models, opt_results, cache = pygsti.algorithms.run_iterative_gst(
    dataset, starting_model, lsgst_circuit_list_of_lists,
    optimizer={'tol': 1e-5}, resource_alloc=None, verbosity=4,
    iteration_objfn_builders=['chi2'], final_objfn_builders=['tvd']
)
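
For context (not part of this PR), here is one hypothetical way to construct the inputs used above, using the smq1Q_XYI modelpack and simulated data; the maximum lengths, noise levels, and sample count are arbitrary placeholders.

from pygsti.modelpacks import smq1Q_XYI

# Target model, fiducials, and germs from the single-qubit X/Y/I modelpack.
target_model = smq1Q_XYI.target_model()
prep_fids, meas_fids = smq1Q_XYI.prep_fiducials(), smq1Q_XYI.meas_fiducials()
germs = smq1Q_XYI.germs()
max_lengths = [1, 2, 4]

# Nested per-iteration circuit lists for iterative GST.
lsgst_circuit_list_of_lists = pygsti.circuits.create_lsgst_circuit_lists(
    target_model, prep_fids, meas_fids, germs, max_lengths)

# Simulate data from a noisy version of the target model.
datagen_model = target_model.depolarize(op_noise=0.05, spam_noise=0.01)
all_circuits = pygsti.circuits.create_lsgst_circuits(
    target_model, prep_fids, meas_fids, germs, max_lengths)
dataset = pygsti.data.simulate_data(
    datagen_model, all_circuits, num_samples=1000, seed=2024)

starting_model = target_model.copy()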
