Revised QML tools
madagra committed Oct 10, 2023
1 parent bde4870 commit 10a2267
Showing 1 changed file with 17 additions and 12 deletions.
29 changes: 17 additions & 12 deletions docs/qml/qml_tools.md
```python exec="on" source="material-block" html="1" session="ansatz"
from qadence.draw import html_string # markdown-exec: hide
print(html_string(ansatz, size="4,4")) # markdown-exec: hide
```

Having a truly *hardware-efficient* ansatz means that the entangling operation can be chosen according to each device's native interactions. Besides digital operations, in Qadence it is also possible to build digital-analog HEAs with the entanglement produced by the natural evolution of a set of interacting qubits, as natively implemented in neutral atom devices. As with other digital-analog functions, this can be controlled with the `strategy` argument which can be chosen from the [`Strategy`](../qadence/types.md) enum type. Currently, only `Strategy.DIGITAL` and `Strategy.SDAQC` are available. By default, calling `strategy = Strategy.SDAQC` will use a global entangling Hamiltonian with Ising-like NN interactions and constant interaction strength,

```python exec="on" source="material-block" html="1" session="ansatz"
from qadence import Strategy
from qadence.draw import html_string # markdown-exec: hide
print(html_string(ansatz, size="4,4")) # markdown-exec: hide
```

Note that, by default, only the time-parameter is automatically parameterized when building a digital-analog HEA. However, as described in the [Hamiltonians tutorial](../tutorials/hamiltonians.md), arbitrary interaction Hamiltonians can be easily built with the `hamiltonian_factory` function, with both customized or fully parameterized interactions, and these can be directly passed as the `entangler` for a customizable digital-analog HEA.

```python exec="on" source="material-block" html="1" session="ansatz"
from qadence import hamiltonian_factory, Interaction, N, Register, hea
ansatz = hea(
from qadence.draw import html_string # markdown-exec: hide
print(html_string(ansatz, size="4,4")) # markdown-exec: hide
```

## Machine Learning Tools

For training QML models, `qadence` also offers a few out-of-the-box routines for optimizing differentiable
models like `QNN`s and `QuantumModel`s containing *trainable* and/or *non-trainable* parameters
(see [the parameters tutorial](../tutorials/parameters) for a refresher on the different parameter types):

* [`train_with_grad`][qadence.ml_tools.train_with_grad] for gradient-based optimization using PyTorch native optimizers
* [`train_gradient_free`][qadence.ml_tools.train_gradient_free] for gradient-free optimization using the [Nevergrad](https://facebookresearch.github.io/nevergrad/) library

These routines perform the training, log/print loss metrics and store intermediate model checkpoints. In the following, we
use `train_with_grad` as an example, but the code can be used directly with the gradient-free routine.

Like every other training routine commonly used in machine learning, it requires
a `model`, `data` and an `optimizer` as input arguments.
```python exec="on" source="material-block"
def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    ...
```
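A minimal `loss_fn` matching this signature could be sketched as follows (the `(inputs, targets)` data layout and the MSE criterion are assumptions for illustration; any differentiable loss returning a `(loss, metrics)` pair works):

```python
import torch

def loss_fn(model: torch.nn.Module, data: torch.Tensor) -> tuple[torch.Tensor, dict]:
    # data is assumed here to be an (inputs, targets) pair
    x, y = data
    out = model(x)
    loss = torch.nn.functional.mse_loss(out, y)
    # metrics are logged/printed alongside the loss during training
    metrics = {"mse": loss.detach()}
    return loss, metrics
```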

The [`TrainConfig`][qadence.ml_tools.config.TrainConfig] tells `train_with_grad` which `batch_size` should be used,
how many epochs to train for, at which intervals to print/log metrics and how often to store intermediate checkpoints.

```python exec="on" source="material-block" result="json"
from qadence.ml_tools import TrainConfig
config = TrainConfig(
    batch_size=batch_size,
)
```

Let's see it in action with a simple example.

### Fitting a function with a QNN using `ml_tools`

Let's look at a complete example of how to use `train_with_grad` now.

```python exec="on" source="material-block" html="1"
train_with_grad(model, (x, y), optimizer, config, loss_fn=loss_fn)

plt.plot(y.numpy())
plt.plot(model(input_values).detach().numpy())

```

For users who want to use the low-level API of `qadence`, here is the example from above
written without `train_with_grad`.

### Fitting a function - Low-level API

```python exec="on" source="material-block" result="json"
from pathlib import Path
for i in range(n_epochs):
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()

```
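The same loop can be reproduced end-to-end with a plain PyTorch model standing in for the QNN (a minimal sketch; the linear model, data and hyperparameters are illustrative):

```python
import torch

# Stand-in model; in the example above this would be the qadence QNN.
model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# Toy dataset: a simple linear relation to fit.
x = torch.linspace(0, 1, 32).reshape(-1, 1)
y = 0.5 * x + 0.1

n_epochs = 50
for i in range(n_epochs):
    optimizer.zero_grad()  # reset gradients accumulated in the previous step
    out = model(x)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
```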
