Usage
The GRAPE package is best used via the interface provided by the QuantumControl framework; see Relation to the QuantumControl Framework below. It helps to be familiar with the concepts used in the framework and with its overview. The package can also be used standalone, as illustrated in the previous Tutorial, and is encapsulated in the API of the GRAPE.optimize function:
GRAPE.optimize — Function

Solve a quantum control problem using the GRAPE method.
```julia
using GRAPE
result = GRAPE.optimize(trajectories, tlist; J_T, kwargs...)
```
minimizes a functional
\[J(\{ϵ_{nl}\}) = J_T(\{|Ψ_k(T)⟩\}) + λ_a J_a(\{ϵ_{nl}\})\,,\]
via the GRAPE method, where the final-time functional $J_T$ depends explicitly on the forward-propagated states $|Ψ_k(T)⟩$. Here, $|Ψ_k(t)⟩$ is the time evolution of the initial_state in the $k$'th element of the trajectories, and the running cost $J_a$ depends explicitly on the pulse values $ϵ_{nl}$ of the $l$'th control discretized on the $n$'th interval of the time grid tlist.
It does this by calculating the gradient of the final-time functional
\[\nabla J_T \equiv \frac{\partial J_T}{\partial ϵ_{nl}} = -2 \Re \underbrace{\underbrace{\bigg\langle χ_k(T) \bigg\vert \hat{U}^{(k)}_{N_T} \dots \hat{U}^{(k)}_{n+1} \bigg\vert}_{\equiv \bra{\chi_k(t_n)}\;\text{(bw. prop.)}} \frac{\partial \hat{U}^{(k)}_n}{\partial ϵ_{nl}}}_{\equiv \bra{χ_k^\prime(t_{n-1})}} \underbrace{\bigg\vert \hat{U}^{(k)}_{n-1} \dots \hat{U}^{(k)}_1 \bigg\vert Ψ_k(t=0) \bigg\rangle}_{\equiv |\Psi_k(t_{n-1})⟩\;\text{(fw. prop.)}}\,,\]
where $\hat{U}^{(k)}_n$ is the time evolution operator for the $n$'th interval, generally assumed to be $\hat{U}^{(k)}_n = \exp[-i \hat{H}_{kn} dt_n]$, where $\hat{H}_{kn}$ is the operator obtained by evaluating trajectories[k].generator on the $n$'th time interval.
The backward-propagation of $|\chi_k(t)⟩$ has the boundary condition
\[ |\chi_k(T)⟩ \equiv - \frac{\partial J_T}{\partial ⟨\Psi_k(T)|}\,.\]
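For example, taking a single trajectory with the common overlap functional $J_T = 1 - |⟨Ψ^{\text{tgt}}|Ψ(T)⟩|^2$ (an illustrative choice, not a requirement), the boundary condition evaluates to

\[|\chi(T)⟩ = -\frac{\partial J_T}{\partial ⟨Ψ(T)|} = ⟨Ψ^{\text{tgt}}|Ψ(T)⟩\, |Ψ^{\text{tgt}}⟩\,,\]

i.e., the target state scaled by its complex overlap with the propagated state.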
The final-time gradient $\nabla J_T$ is combined with the gradient for the running costs, and the total gradient is then fed into an optimizer (L-BFGS-B by default) that iteratively changes the values $\{ϵ_{nl}\}$ to minimize $J$.
See Background for details.
Returns a GrapeResult.
Positional arguments
- trajectories: A vector of Trajectory objects (see the sketch below for an illustrative construction). Each trajectory contains an initial_state and a dynamical generator (e.g., a time-dependent Hamiltonian). Each trajectory may also contain arbitrary additional attributes like target_state to be used in the J_T functional.
- tlist: A vector of time grid values.
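As an illustrative sketch (the two-level Hamiltonian, guess pulse, and time grid below are placeholder choices, not prescribed by the API), the positional arguments for a single state-to-state trajectory might be constructed as follows, assuming the Trajectory and hamiltonian constructors from QuantumControl:

```julia
# Placeholder two-level system: drift term plus one driven term with a guess pulse.
using QuantumControl: Trajectory, hamiltonian

Sz = ComplexF64[1 0; 0 -1]
Sx = ComplexF64[0 1; 1 0]
ϵ_guess(t) = 0.1 * sin(π * t / 5.0)        # guess control field
H = hamiltonian(-0.5 * Sz, (Sx, ϵ_guess))  # time-dependent generator

tlist = collect(range(0, 5.0; length=501)) # time grid with 500 intervals

trajectories = [
    Trajectory(
        ComplexF64[1, 0],               # initial_state
        H;                              # generator
        target_state=ComplexF64[0, 1],  # extra attribute, used by J_T
    )
]
```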
Required keyword arguments
- J_T: A function J_T(Ψ, trajectories) that evaluates the final time functional from a list Ψ of forward-propagated states and the trajectories. The function J_T may also take a keyword argument tau. If it does, a vector containing the complex overlaps of the target states (the target_state property of each trajectory in trajectories) with the propagated states will be passed to J_T (see the sketch below).
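As a minimal sketch of a compatible functional (the name J_T_example and the normalization are illustrative choices; QuantumControl.Functionals provides ready-made functionals such as J_T_sm for production use), a square-modulus functional that makes use of the tau keyword could look like this:

```julia
# Illustrative square-modulus functional with optional precomputed overlaps
# tau[k] = ⟨Ψₖ_tgt|Ψₖ(T)⟩.
using LinearAlgebra: dot

function J_T_example(Ψ, trajectories; tau=nothing)
    if tau === nothing
        # fall back to computing the overlaps from the target states
        tau = [dot(trajectories[k].target_state, Ψ[k]) for k in eachindex(Ψ)]
    end
    N = length(tau)
    return 1.0 - abs2(sum(tau)) / N^2
end
```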
Optional keyword arguments
- chi: A function chi(Ψ, trajectories) that receives a list Ψ of the forward-propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined from J_T via QuantumControl.Functionals.make_chi with the default parameters. Similarly to J_T, if chi accepts a keyword argument tau, it will be passed a vector of complex overlaps.
- chi_min_norm=1e-100: The minimum allowable norm for any $|χₖ(T)⟩$. Smaller norms indicate that the gradient is effectively zero and will abort the optimization with an error.
- J_a: A function J_a(pulsevals, tlist) that evaluates running costs over the pulse values, where pulsevals are the vectorized values $ϵ_{nl}$, with n the indices of the time intervals and l the indices of the controls, i.e., [ϵ₁₁, ϵ₂₁, …, ϵ₁₂, ϵ₂₂, …] (the pulse values for each control are contiguous). If not given, the optimization will not include a running cost (see the sketch after this list for an example).
- gradient_method=:gradgen: One of :gradgen (default) or :taylor. With gradient_method=:gradgen, the gradient is calculated using QuantumGradientGenerators. With gradient_method=:taylor, it is evaluated via a Taylor series; see Eq. (20) in Kuprov and Rogers, J. Chem. Phys. 131, 234108 (2009) [22].
- taylor_grad_max_order=100: If given with gradient_method=:taylor, the maximum number of terms in the Taylor series. If taylor_grad_check_convergence=true (default) and the Taylor series does not converge within the given number of terms, an error is thrown. With taylor_grad_check_convergence=false, this is the exact order of the Taylor series.
- taylor_grad_tolerance=1e-16: If given with gradient_method=:taylor and taylor_grad_check_convergence=true, stop the Taylor series when the norm of a term falls below the given tolerance. Ignored if taylor_grad_check_convergence=false.
- taylor_grad_check_convergence=true: If true (default), check convergence after each term in the Taylor series and stop as soon as the norm of a term drops below the given tolerance. If false, stop after exactly taylor_grad_max_order terms.
- lambda_a=1: A weight for the running cost J_a.
- grad_J_a: A function to calculate the gradient of J_a. If not given, it will be automatically determined. See make_grad_J_a for the required interface.
- upper_bound: An upper bound for the value of any optimized control. Time-dependent upper bounds can be specified via pulse_options.
- lower_bound: A lower bound for the value of any optimized control. Time-dependent lower bounds can be specified via pulse_options.
- pulse_options: A dictionary that maps every control (as obtained by get_controls from the trajectories) to a dict with the following possible keys:
  - :upper_bounds: A vector of upper bound values, one for each interval of the time grid. Values of Inf indicate an unconstrained upper bound for that time interval, or the global upper_bound, if given.
  - :lower_bounds: A vector of lower bound values. Values of -Inf indicate an unconstrained lower bound for that time interval, or the global lower_bound, if given.
- callback: A function (or tuple of functions) that receives the GRAPE workspace and the iteration number. The function may return a tuple of values which are stored in the GrapeResult object result.records. The function can also mutate the workspace, in particular the updated pulsevals. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations.
- check_convergence: A function to check whether convergence has been reached. Receives a GrapeResult object result, and should set result.converged to true and result.message to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with ∘. The convergence check is performed after any callback.
- prop_method: The propagation method to use for each trajectory; see below.
- verbose=false: If true, print information during initialization.
- rethrow_exceptions: By default, any exception ends the optimization but still returns a GrapeResult that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. If rethrow_exceptions=true, the exception is thrown normally instead of being captured.
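The following sketch combines several of these options in one call, reusing trajectories, tlist, ϵ_guess, and J_T_example from the sketches above; the specific running cost, bounds, and convergence threshold are arbitrary placeholder choices:

```julia
using GRAPE
using QuantumPropagators: ExpProp

N_intervals = length(tlist) - 1  # one bound value per time interval

result = GRAPE.optimize(
    trajectories, tlist;
    J_T=J_T_example,
    J_a=(ϵ, _) -> sum(abs2, ϵ),     # simple L2 running cost on the pulse values
    lambda_a=0.01,
    pulse_options=Dict(
        ϵ_guess => Dict(
            :upper_bounds => fill(1.0, N_intervals),
            :lower_bounds => fill(-1.0, N_intervals),
        ),
    ),
    callback=(wrk, iteration) -> nothing,  # could, e.g., filter wrk.pulsevals here
    check_convergence=res -> begin
        # `res` is the GrapeResult; stop once the functional is small enough
        if res.J_T < 1e-3
            res.converged = true
            res.message = "J_T < 10⁻³"
        end
    end,
    prop_method=ExpProp,
)
```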
Experimental keyword arguments
The following keyword arguments may change in non-breaking releases:
- x_tol: Parameter for Optim.jl
- f_tol: Parameter for Optim.jl
- g_tol: Parameter for Optim.jl
- show_trace: Parameter for Optim.jl
- extended_trace: Parameter for Optim.jl
- show_every: Parameter for Optim.jl
- allow_f_increases: Parameter for Optim.jl
- optimizer: An optional Optim.jl optimizer (Optim.AbstractOptimizer instance). If not given, an L-BFGS-B optimizer will be used.
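As a sketch, and assuming that a standard gradient-based Optim.jl optimizer such as Optim.LBFGS is appropriate for the problem at hand, a custom optimizer can be passed as follows (reusing names from the sketches above):

```julia
# Swap the default L-BFGS-B backend for an Optim.jl L-BFGS optimizer.
using GRAPE
using Optim
using QuantumPropagators: ExpProp

result = GRAPE.optimize(
    trajectories, tlist;
    J_T=J_T_example,
    optimizer=Optim.LBFGS(),
    prop_method=ExpProp,
)
```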
Trajectory propagation
GRAPE may involve three types of time propagation, all of which are implemented via the QuantumPropagators package as a numerical backend:
- A forward propagation for every Trajectory in the trajectories
- A backward propagation for every trajectory
- A backward propagation of a gradient generator for every trajectory.
The keyword arguments for each propagation (see propagate) are determined from any properties of each Trajectory that have a prop_ prefix, cf. init_prop_trajectory.
In situations where different parameters are required for the forward and backward propagation, the fw_prop_ and bw_prop_ prefixes can be used instead of the prop_ prefix, respectively. These override any setting with the prop_ prefix. Similarly, properties for the backward propagation of the gradient generators can be set with properties that have a grad_prop_ prefix. These prefixes apply both to the properties of each Trajectory and to the keyword arguments.
Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method keyword argument.
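As a sketch of these conventions (reusing H, tlist, and J_T_example from above, and assuming the Cheby and ExpProp propagation methods provided by QuantumPropagators), a trajectory could specify different forward and backward propagation settings while a global prop_method covers everything else:

```julia
using GRAPE
using QuantumControl: Trajectory
using QuantumPropagators: Cheby, ExpProp

traj = Trajectory(
    ComplexF64[1, 0], H;
    target_state=ComplexF64[0, 1],
    fw_prop_method=Cheby,    # forward propagation uses the Chebychev propagator
    bw_prop_method=ExpProp,  # backward propagation uses the explicit matrix exponential
)

result = GRAPE.optimize([traj], tlist; J_T=J_T_example, prop_method=ExpProp)
```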
Relation to the QuantumControl Framework
The GRAPE package is associated with the broader QuantumControl framework. The role of QuantumControl in relation to GRAPE has two aspects:

- QuantumControl provides a collection of components that are useful for formulating control problems in general, for solution via GRAPE or arbitrary other methods of quantum control. This includes, for example, control functions and control amplitudes, data structures for time-dependent Hamiltonians or Liouvillians, and common optimization functionals.
- QuantumControl provides a common way to formulate a ControlProblem, and general optimize and @optimize_or_load functions that particular optimization packages like GRAPE can plug in to. The aim is to encourage a common interface between different optimization packages that makes it easy to switch between methods.
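As a sketch (assuming the ControlProblem constructor takes the trajectories and time grid positionally, with all other settings as keyword arguments), the setup from the sketches above could be phrased as a ControlProblem:

```julia
using QuantumControl: ControlProblem
using QuantumControl.Functionals: J_T_sm
using QuantumPropagators: ExpProp

problem = ControlProblem(
    trajectories, tlist;
    J_T=J_T_sm,           # standard square-modulus functional
    prop_method=ExpProp,
    iter_stop=100,
)
```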
QuantumControl.optimize — Method

```julia
using GRAPE
result = optimize(problem; method=GRAPE, kwargs...)
```
optimizes the given QuantumControl.ControlProblem using the GRAPE (Gradient-Ascent Pulse Engineering) method.
Delegates to
```julia
result = GRAPE.optimize(
    problem.trajectories, problem.tlist; problem.kwargs..., kwargs...
)
```
See GRAPE.optimize for details and supported keyword arguments.
Compared to calling GRAPE.optimize directly, the QuantumControl.optimize wrapper adds the following keyword arguments:
- check=true: If true (default), test that all the objects stored in the trajectories implement the required interfaces correctly.
- print_iters=true: Whether to print information after each iteration.
- print_iter_info=["iter.", "J_T", "|∇J|", "|Δϵ|", "ΔJ", "FG(F)", "secs"]: Which fields to print if print_iters=true. See make_grape_print_iters.
- store_iter_info=[]: Which fields to store in result.records, given as a list of header labels (see print_iter_info). See make_grape_print_iters.
These options still allow for the normal callback argument.
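For example, a sketch reusing the problem from the previous sketch (the chosen labels come from the print_iter_info default list):

```julia
using GRAPE
using QuantumControl: optimize

result = optimize(
    problem;
    method=GRAPE,
    store_iter_info=["iter.", "J_T", "|∇J|"],
)
# each element of result.records then holds the stored values for one iteration
```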
The GRAPE optimization may also be initiated via QuantumControl.@optimize_or_load, which additionally adds checkpointing to ensure that an optimization result is dumped to disk in case of an unexpected shutdown.
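A sketch of this usage, with a hypothetical output filename:

```julia
using GRAPE
using QuantumControl: @optimize_or_load

# Runs the optimization and writes the result to the file; on subsequent runs,
# the stored result is loaded instead of re-optimizing.
result = @optimize_or_load("grape_result.jld2", problem; method=GRAPE)
```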