# API

`GRAPE.GrapeResult` — Type

Result object returned by `optimize_grape`.

**Attributes**

The attributes of a `GrapeResult` object include

- `iter`: The number of the current iteration
- `J_T`: The value of the final-time functional in the current iteration
- `J_T_prev`: The value of the final-time functional in the previous iteration
- `J_a`: The value of the running cost $J_a$ in the current iteration (excluding $λ_a$)
- `J_a_prev`: The value of $J_a$ in the previous iteration
- `tlist`: The time grid on which the controls are discretized
- `guess_controls`: A vector of the original control fields (each field discretized to the points of `tlist`)
- `optimized_controls`: A vector of the optimized control fields in the current iteration
- `records`: A vector of tuples with values returned by a `callback` routine passed to `optimize`
- `converged`: A boolean flag indicating whether the optimization has converged. This may be set to `true` by a `check_convergence` function.
- `message`: A message string explaining the reason for convergence. This may be set by a `check_convergence` function.

All of the above attributes may be referenced in a `check_convergence` function passed to `optimize(problem; method=GRAPE)`.
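As a sketch, a `check_convergence` function reading these attributes might look as follows (the threshold values are hypothetical examples, not package defaults):

```julia
# Hedged sketch of a `check_convergence` function using the documented
# `GrapeResult` attributes. Thresholds below are illustrative only.
function check_convergence!(result)
    if result.J_T < 1e-3
        result.converged = true
        result.message = "J_T < 10⁻³"
    elseif abs(result.J_T - result.J_T_prev) < 1e-6
        result.converged = true
        result.message = "ΔJ_T < 10⁻⁶"
    end
    return result
end
```

This would be passed as, e.g., `optimize(problem; method=GRAPE, check_convergence=check_convergence!)`.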

`GRAPE.GrapeWrk` — Type

GRAPE workspace.

The workspace is for internal use. However, it is also accessible in a `callback` function. The callback may use or modify some of the following attributes:

- `trajectories`: A copy of the trajectories defining the control problem
- `adjoint_trajectories`: The `trajectories` with the adjoint generator
- `kwargs`: The keyword arguments from the `ControlProblem` or the call to `optimize`
- `controls`: A tuple of the original controls (probably functions)
- `pulsevals_guess`: The combined vector of pulse values that are the guess in the current iteration. Initially, this vector is the concatenation of discretizing `controls` to the midpoints of the time grid.
- `pulsevals`: The combined vector of updated pulse values in the current iteration
- `gradient`: The total gradient for the guess in the current iteration
- `grad_J_T`: The *current* gradient for the final-time part of the functional. This is from the last evaluation of the gradient, which may be for the optimized pulse (depending on the internals of the optimizer).
- `grad_J_a`: The *current* gradient for the running-cost part of the functional
- `J_parts`: The two-component vector $[J_T, J_a]$
- `result`: The current result object
- `upper_bounds`: Upper bound for every element of `pulsevals`; `+Inf` indicates no bound
- `lower_bounds`: Lower bound for every element of `pulsevals`; `-Inf` indicates no bound
- `fg_count`: The total number of evaluations of the functional and of the gradient, as a two-element vector
- `optimizer`: The backend optimizer object
- `optimizer_state`: The internal state object of the `optimizer` (`nothing` if the `optimizer` has no internal state)
- `tau_grads`: The gradients $∂τ_k/∂ϵ_l(t_n)$
- `fw_storage`: The storage of states for the forward propagation
- `fw_propagators`: The propagators used for the forward propagation
- `bw_grad_propagators`: The propagators used for the backward propagation of `QuantumGradientGenerators.GradVector` states (`gradient_method=:gradgen` only)
- `bw_propagators`: The propagators used for the backward propagation (`gradient_method=:taylor` only)
- `use_threads`: Flag indicating whether the propagations are performed in parallel
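A minimal sketch of a `callback` that inspects and modifies the workspace (the clamping below is a hypothetical manipulation; a real callback might apply a spectral filter instead):

```julia
# Hedged sketch of a `callback` using the documented `GrapeWrk` attributes.
function my_callback(wrk, iteration, args...)
    # The total functional J = J_T + λ_a J_a, from the two-component J_parts
    J = sum(wrk.J_parts)
    # Example manipulation: clamp the updated pulse values in place
    # (illustrative bounds; adjust to the physical amplitude limits)
    wrk.pulsevals .= clamp.(wrk.pulsevals, -1.0, 1.0)
    return (iteration, J)  # stored as a tuple in `result.records`
end
```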

In addition, the following methods provide safer (non-mutating) access to information in the workspace:

`GRAPE.gradient` — Method

The gradient in the current iteration.

`g = gradient(wrk; which=:initial)` returns the gradient associated with the guess pulse of the current iteration. Up to quasi-Newton corrections, the negative gradient determines the `search_direction` for the `pulse_update`.

`g = gradient(wrk; which=:final)` returns the gradient associated with the optimized pulse of the current iteration.

`GRAPE.make_grape_print_iters` — Method

Print optimization progress as a table.

This function serves as the default `info_hook` for an optimization with GRAPE.

`GRAPE.norm_search` — Method

The norm of the search direction vector in the current iteration.

`norm_search(wrk)` returns `norm(search_direction(wrk))`.

`GRAPE.optimize_grape` — Method

`GRAPE.pulse_update` — Method

The vector of pulse update values for the current iteration.

`Δu = pulse_update(wrk)` returns a vector containing the difference between the optimized pulse values and the guess pulse values of the current iteration. This should be proportional to `search_direction`, with the proportionality factor `step_width`.

`GRAPE.search_direction` — Method

The search direction used in the current iteration.

`s = search_direction(wrk)` returns the vector describing the search direction used in the current iteration. The `pulse_update` should be proportional to this search direction, with the proportionality factor `step_width`.

`GRAPE.step_width` — Method

The step width used in the current iteration.

`α = step_width(wrk)` returns the scalar `α` so that `pulse_update(wrk) = α * search_direction(wrk)`, see `pulse_update` and `search_direction`, for the iteration described by the current `GrapeWrk` (for the state of `wrk` as available in the `callback` of the current iteration).

`GRAPE.vec_angle` — Method

The angle between two vectors.

`ϕ = vec_angle(v1, v2; unit=:rad)` returns the angle between two vectors in radians (or degrees, with `unit=:degree`).
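These accessors can be combined in a `callback` to monitor the line search. A hedged sketch, using only the methods documented above (the returned diagnostics are illustrative):

```julia
using LinearAlgebra: norm
using GRAPE: pulse_update, search_direction, step_width, vec_angle

# Sketch of a diagnostics callback using the non-mutating accessors.
function diagnostics_callback(wrk, iteration, args...)
    g = GRAPE.gradient(wrk; which=:initial)  # gradient for the guess pulse
    s = search_direction(wrk)
    α = step_width(wrk)
    ϕ = vec_angle(-g, s; unit=:degree)  # angle between -∇J and s
    Δϵ = pulse_update(wrk)              # should equal α * s
    return (iteration, α, ϕ, norm(Δϵ))  # stored in `result.records`
end
```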

`QuantumControl.optimize` — Method

```
using GRAPE
result = optimize(problem; method=GRAPE, kwargs...)
```

optimizes the given control `problem` via the GRAPE method, by minimizing the functional

\[J(\{ϵ_{nl}\}) = J_T(\{|ϕ_k(T)⟩\}) + λ_a J_a(\{ϵ_{nl}\})\]

where the final-time functional $J_T$ depends explicitly on the forward-propagated states and the running cost $J_a$ depends explicitly on the pulse values $ϵ_{nl}$ of the $l$'th control discretized on the $n$'th interval of the time grid.

Returns a `GrapeResult`.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of `problem`; any of these can be overridden with explicit keyword arguments to `optimize`.

**Required problem keyword arguments**

- `J_T`: A function `J_T(Ψ, trajectories)` that evaluates the final-time functional from a list `Ψ` of forward-propagated states and `problem.trajectories`. The function `J_T` may also take a keyword argument `tau`. If it does, a vector containing the complex overlaps of the target states (`target_state` property of each trajectory in `problem.trajectories`) with the propagated states will be passed to `J_T`.
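For illustration, a custom functional matching this interface might be sketched as follows (the name `my_J_T_sm` and the square-modulus form are assumptions for this example, not part of the package):

```julia
using LinearAlgebra: dot

# Hedged sketch of a final-time functional with the documented
# `J_T(Ψ, trajectories; tau)` interface: square-modulus infidelity.
# Assumes each trajectory has a `target_state` compatible with `dot`.
function my_J_T_sm(Ψ, trajectories; tau=nothing)
    if tau === nothing
        # Complex overlaps of target states with propagated states
        tau = [dot(traj.target_state, Ψₖ) for (traj, Ψₖ) in zip(trajectories, Ψ)]
    end
    N = length(tau)
    F_sm = abs2(sum(tau)) / N^2  # square-modulus fidelity
    return 1.0 - F_sm
end
```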

**Optional problem keyword arguments**

- `chi`: A function `chi(Ψ, trajectories)` that receives a list `Ψ` of the forward-propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined from `J_T` via `make_chi` with the default parameters. Similarly to `J_T`, if `chi` accepts a keyword argument `tau`, it will be passed a vector of complex overlaps.
- `J_a`: A function `J_a(pulsevals, tlist)` that evaluates running costs over the pulse values, where `pulsevals` are the vectorized values $ϵ_{nl}$, with `n` the indices of the time intervals and `l` the indices over the controls, i.e., `[ϵ₁₁, ϵ₂₁, …, ϵ₁₂, ϵ₂₂, …]` (the pulse values for each control are contiguous). If not given, the optimization will not include a running cost.
- `gradient_method=:gradgen`: One of `:gradgen` (default) or `:taylor`. With `gradient_method=:gradgen`, the gradient is calculated using QuantumGradientGenerators. With `gradient_method=:taylor`, it is evaluated via a Taylor series, see Eq. (20) in Kuprov and Rogers, J. Chem. Phys. 131, 234108 (2009) [4].
- `taylor_grad_max_order=100`: If given with `gradient_method=:taylor`, the maximum number of terms in the Taylor series. If `taylor_grad_check_convergence=true` (default), an error is thrown if the Taylor series does not converge within the given number of terms. With `taylor_grad_check_convergence=false`, this is the exact order of the Taylor series.
- `taylor_grad_tolerance=1e-16`: If given with `gradient_method=:taylor` and `taylor_grad_check_convergence=true`, stop the Taylor series when the norm of the term falls below the given tolerance. Ignored if `taylor_grad_check_convergence=false`.
- `taylor_grad_check_convergence=true`: If given as `true` (default), check the convergence after each term in the Taylor series and stop as soon as the norm of the term drops below the given tolerance. If `false`, stop after exactly `taylor_grad_max_order` terms.
- `lambda_a=1`: A weight for the running cost `J_a`.
- `grad_J_a`: A function to calculate the gradient of `J_a`. If not given, it will be automatically determined. See `make_grad_J_a` for the required interface.
- `upper_bound`: An upper bound for the value of any optimized control. Time-dependent upper bounds can be specified via `pulse_options`.
- `lower_bound`: A lower bound for the value of any optimized control. Time-dependent lower bounds can be specified via `pulse_options`.
- `pulse_options`: A dictionary that maps every control (as obtained by `get_controls` from `problem.trajectories`) to a dict with the following possible keys:
  - `:upper_bounds`: A vector of upper bound values, one for each interval of the time grid. Values of `Inf` indicate an unconstrained upper bound for that time interval, respectively the global `upper_bound`, if given.
  - `:lower_bounds`: A vector of lower bound values. Values of `-Inf` indicate an unconstrained lower bound for that time interval, respectively the global `lower_bound`, if given.
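As an illustration, a running cost matching the documented `J_a(pulsevals, tlist)` interface might look like the following sketch (the name `my_J_a_l2` and the L² form are assumptions; a uniform time grid is assumed):

```julia
# Hedged sketch: an L²-type running cost over the vectorized pulse values.
function my_J_a_l2(pulsevals, tlist)
    dt = tlist[2] - tlist[1]  # width of one time interval (uniform grid assumed)
    return 0.5 * sum(abs2, pulsevals) * dt
end
```

This would be passed as, e.g., `optimize(problem; method=GRAPE, J_a=my_J_a_l2, lambda_a=0.1)`.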

- `print_iters=true`: Whether to print information after each iteration.
- `print_iter_info=["iter.", "J_T", "|∇J|", "|Δϵ|", "ΔJ", "FG(F)", "secs"]`: Which fields to print if `print_iters=true`. If given, must be a list of header labels (strings), which can be any of the following:
  - `"iter."`: The iteration number
  - `"J_T"`: The value of the final-time functional for the dynamics under the optimized pulses
  - `"J_a"`: The value of the pulse-dependent running cost for the optimized pulses
  - `"λ_a⋅J_a"`: The total contribution of `J_a` to the full functional `J`
  - `"J"`: The value of the optimization functional for the optimized pulses
  - `"ǁ∇J_Tǁ"`: The ℓ²-norm of the *current* gradient of the final-time functional. Note that this is usually the gradient for the optimized pulse, not the guess pulse.
  - `"ǁ∇J_aǁ"`: The ℓ²-norm of the *current* gradient of the pulse-dependent running cost. For comparison with `"ǁ∇J_Tǁ"`.
  - `"λ_aǁ∇J_aǁ"`: The ℓ²-norm of the *current* gradient of the complete pulse-dependent running cost term. For comparison with `"ǁ∇J_Tǁ"`.
  - `"ǁ∇Jǁ"`: The norm of the guess pulse gradient. Note that the *guess* pulse gradient is not the same as the *current* gradient.
  - `"ǁΔϵǁ"`: The ℓ²-norm of the pulse update
  - `"ǁϵǁ"`: The ℓ²-norm of the optimized pulse values
  - `"max|Δϵ|"`: The maximum value of the pulse update (infinity norm)
  - `"max|ϵ|"`: The maximum value of the pulse values (infinity norm)
  - `"ǁΔϵǁ/ǁϵǁ"`: The ratio of the pulse update to the optimized pulse values
  - `"∫Δϵ²dt"`: The L²-norm of the pulse update, summed over all pulses. A convergence measure comparable (proportional) to the running cost in Krotov's method
  - `"ǁsǁ"`: The norm of the search direction. Should be `ǁΔϵǁ` scaled by the step width `α`.
  - `"∠°"`: The angle (in degrees) between the negative gradient `-∇J` and the search direction `s`
  - `"α"`: The step width as determined by the line search (`Δϵ = α⋅s`)
  - `"ΔJ_T"`: The change in the final-time functional relative to the previous iteration
  - `"ΔJ_a"`: The change in the control-dependent running cost relative to the previous iteration
  - `"λ_a⋅ΔJ_a"`: The change in the control-dependent running cost term relative to the previous iteration
  - `"ΔJ"`: The change in the total optimization functional relative to the previous iteration
  - `"FG(F)"`: The number of functional/gradient evaluations (FG), or pure functional (F) evaluations
  - `"secs"`: The number of seconds of wallclock time spent on the iteration
- `store_iter_info=[]`: Which fields to store in `result.records`, given as a list of header labels, see `print_iter_info`.
- `callback`: A function (or tuple of functions) that receives the GRAPE workspace and the iteration number. The function may return a tuple of values which are stored in the `GrapeResult` object `result.records`. The function can also mutate the workspace, in particular the updated `pulsevals`. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations. Note that `print_iters=true` (default) adds an automatic callback to print information after each iteration. With `store_iter_info`, that callback automatically stores a subset of the available information.
- `check_convergence`: A function to check whether convergence has been reached. Receives a `GrapeResult` object `result`, and should set `result.converged` to `true` and `result.message` to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with `∘`. The convergence check is performed after any `callback`.
- `x_tol`: Parameter for Optim.jl
- `f_tol`: Parameter for Optim.jl
- `g_tol`: Parameter for Optim.jl
- `show_trace`: Parameter for Optim.jl
- `extended_trace`: Parameter for Optim.jl
- `show_every`: Parameter for Optim.jl
- `allow_f_increases`: Parameter for Optim.jl
- `optimizer`: An optional Optim.jl optimizer (`Optim.AbstractOptimizer` instance). If not given, an L-BFGS-B optimizer will be used.
- `prop_method`: The propagation method to use for each trajectory, see below.
- `verbose=false`: If `true`, print information during initialization.
- `rethrow_exceptions`: By default, any exception ends the optimization, but still returns a `GrapeResult` that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. If `rethrow_exceptions=true`, instead of capturing the exception, it will be thrown normally.
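Putting several of these keyword arguments together, a typical invocation might be sketched as follows (assuming a `ControlProblem` named `problem` set up elsewhere; `J_T_sm` from `QuantumControl.Functionals` and the numeric values are illustrative assumptions):

```julia
using QuantumControl: optimize
using QuantumControl.Functionals: J_T_sm  # assumed standard functional
using GRAPE

result = optimize(
    problem;             # a ControlProblem, defined elsewhere
    method=GRAPE,
    J_T=J_T_sm,
    lambda_a=0.1,        # weight for a running cost, if `J_a` is also given
    upper_bound=1.0,     # global amplitude bounds (illustrative values)
    lower_bound=-1.0,
    check_convergence=res -> begin
        if res.J_T < 1e-3
            res.converged = true
            res.message = "J_T < 10⁻³"
        end
        res
    end,
)
println(result.message)
```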

**Trajectory propagation**

GRAPE may involve three types of propagation:

- A forward propagation for every `Trajectory` in the `problem`
- A backward propagation for every trajectory
- A backward propagation of a gradient generator for every trajectory

The keyword arguments for each propagation (see `propagate`) are determined from any properties of each `Trajectory` that have a `prop_` prefix, cf. `init_prop_trajectory`.

In situations where different parameters are required for the forward and backward propagation, the `fw_prop_` and `bw_prop_` prefixes can be used instead of the `prop_` prefix, respectively. These override any setting with the `prop_` prefix. Similarly, properties for the backward propagation of the gradient generators can be set with properties that have a `grad_prop_` prefix. These prefixes apply both to the properties of each `Trajectory` and to the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global `prop_method` problem keyword argument.
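As a hedged illustration of the prefix convention (the propagation methods `Cheby` and `ExpProp` are assumed to be available from QuantumPropagators, and the states `Ψ0`, `Ψtgt` and generator `H` are placeholders; adjust to your setup):

```julia
using QuantumControl: Trajectory
using QuantumPropagators: Cheby, ExpProp  # assumed propagation methods

# The forward propagation uses Cheby; the backward propagation overrides
# it via the `bw_prop_` prefix. A `grad_prop_` prefix would similarly
# target the backward propagation of the gradient generator.
traj = Trajectory(
    Ψ0, H;                 # initial state and generator, defined elsewhere
    target_state=Ψtgt,
    prop_method=Cheby,
    bw_prop_method=ExpProp,
)
```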