Control Methods
All optimizations in the QuantumControl package are done by calling `QuantumControl.optimize`, or preferably the high-level wrapper `@optimize_or_load`. The actual control methods are implemented in separate packages. The module implementing a particular method should be passed to `optimize` as the `method` keyword argument.
QuantumControlBase.optimize — Method

Optimize a quantum control problem.

```julia
result = optimize(problem; method, check=true, kwargs...)
```

optimizes towards a solution of the given `problem` with the given `method`, which should be a `Module` implementing the method, e.g.,

```julia
using Krotov
result = optimize(problem; method=Krotov)
```

Note that `method` is a mandatory keyword argument.

If `check` is true (default), the `initial_state` and `generator` of each trajectory are checked with `check_state` and `check_generator`. Any other keyword argument temporarily overrides the corresponding keyword argument in `problem`. These arguments are available to the optimizer; see each optimization package's documentation for details.

To obtain the documentation for which options a particular method uses, run, e.g.,

```julia
? optimize(problem, ::Val{:Krotov})
```

where `:Krotov` is the name of the module implementing the method. The above is also the method signature that a `Module` wishing to implement a control method must define.

The returned `result` object is specific to the optimization method.
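As an illustration of that mechanism, a hypothetical package `MyMethod` wishing to plug into `optimize` would define something along these lines (all names other than `QuantumControlBase.optimize` and `Val` are illustrative, not part of the actual API):

```julia
module MyMethod

import QuantumControlBase

# Hypothetical implementation hook: a call to
# `optimize(problem; method=MyMethod)` is dispatched by
# QuantumControlBase to this Val-based method signature.
function QuantumControlBase.optimize(problem, ::Val{:MyMethod}; kwargs...)
    # ... run the actual optimization ...
    # ... and return a method-specific result object
end

end
```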
The following methods of optimal control are implemented by packages in the JuliaQuantumControl organization:
Krotov's Method
See the documentation of the `Krotov` package for more details.
QuantumControlBase.optimize — Method

```julia
using Krotov
result = optimize(problem; method=Krotov, kwargs...)
```

optimizes the given control `problem` using Krotov's method, returning a `KrotovResult`.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of `problem`; any of these can be overridden with explicit keyword arguments to `optimize`.
Required problem keyword arguments

- `J_T`: A function `J_T(ϕ, trajectories)` that evaluates the final time functional from a list `ϕ` of forward-propagated states and `problem.trajectories`.
Recommended problem keyword arguments

- `lambda_a=1.0`: The inverse Krotov step width λₐ for every pulse.
- `update_shape=(t->1.0)`: A function `S(t)` for the "update shape" that scales the update for every pulse.

If different controls require a different `lambda_a` or `update_shape`, a dict `pulse_options` must be given instead of a global `lambda_a` and `update_shape`; see below.
Optional problem keyword arguments

The following keyword arguments are supported (with default values):

- `pulse_options`: A dictionary that maps every control (as obtained by `get_controls` from the `problem.trajectories`) to the following dict:
  - `:lambda_a`: The value for the inverse Krotov step width λₐ.
  - `:update_shape`: A function `S(t)` for the "update shape" that scales the Krotov pulse update.

  This overrides the global `lambda_a` and `update_shape` arguments.
- `chi`: A function `chi!(χ, ϕ, trajectories)` that receives a list `ϕ` of the forward-propagated states and must set $|χₖ⟩ = -∂J_T/∂⟨ϕₖ|$. If not given, it will be automatically determined from `J_T` via `make_chi` with the default parameters.
- `sigma=nothing`: A function that calculates the second-order contribution. If not given, the first-order Krotov method is used.
- `iter_start=0`: The initial iteration number.
- `iter_stop=5000`: The maximum iteration number.
- `prop_method`: The propagation method to use for each trajectory; see below.
- `update_hook`: A function that receives the Krotov workspace, the iteration number, the list of updated pulses, and the list of guess pulses as positional arguments. The function may mutate any of its arguments. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations.
- `info_hook`: A function (or tuple of functions) that receives the same arguments as `update_hook`, in order to write information about the current iteration to the screen or to a file. The default `info_hook` prints a table with convergence information to the screen. Runs after `update_hook`. The `info_hook` function may return a tuple, which is stored in the list of `records` inside the `KrotovResult` object.
- `check_convergence`: A function to check whether convergence has been reached. Receives a `KrotovResult` object `result`, and should set `result.converged` to `true` and `result.message` to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with `∘`. The convergence check is performed after any calls to `update_hook` and `info_hook`.
- `verbose=false`: If `true`, print information during initialization.
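As a minimal sketch of per-control options, assuming two hypothetical controls `ϵ₁` and `ϵ₂` over a time grid from 0 to 10 (in practice, the controls are the exact objects returned by `get_controls`):

```julia
# Illustrative controls; real code must use the control objects
# obtained via get_controls from problem.trajectories.
ϵ₁(t) = 0.5 * sin(π * t / 10)
ϵ₂(t) = 0.1

# Illustrative update shape: zero at t=0 and t=10 so that the
# pulse boundaries stay fixed during the optimization.
S(t) = sin(π * t / 10)^2

pulse_options = IdDict(
    ϵ₁ => Dict(:lambda_a => 2.0, :update_shape => S),
    ϵ₂ => Dict(:lambda_a => 10.0, :update_shape => t -> 1.0),
)
```

An `IdDict` keyed by the control objects themselves is a natural choice here, since two distinct controls may compare equal as functions.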
Trajectory propagation
Krotov's method involves a forward and a backward propagation for every `Trajectory` in the `problem`. The keyword arguments for each propagation (see `propagate`) are determined from any properties of each `Trajectory` that have a `prop_` prefix, cf. `init_prop_trajectory`.
In situations where different parameters are required for the forward and backward propagation, the `fw_prop_` and `bw_prop_` prefixes can be used instead of the `prop_` prefix, respectively. These override any setting with the `prop_` prefix. This applies both to the properties of each `Trajectory` and the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global `prop_method` problem keyword argument.
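A sketch of how such prefixed properties might be attached (the states `ψ₀`, `ψ₁`, the generator `H`, the time grid `tlist`, and the functional `J_T` are assumed to be defined elsewhere; `bw_prop_inplace` is an illustrative propagation option, not a requirement of the method):

```julia
traj = Trajectory(ψ₀, H;
                  target_state=ψ₁,
                  bw_prop_inplace=false)  # applies only to the backward propagation

problem = ControlProblem(
    trajectories=[traj],
    tlist=tlist,
    prop_method=ExpProp,  # global propagation method for all propagations
    J_T=J_T,
    lambda_a=5.0,
)
result = optimize(problem; method=Krotov)
```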
GRAPE
The Gradient Ascent Pulse Engineering (GRAPE) method is implemented in the `GRAPE` package. See the `GRAPE` documentation for details.
QuantumControlBase.optimize — Method

```julia
using GRAPE
result = optimize(problem; method=GRAPE, kwargs...)
```

optimizes the given control `problem` via the GRAPE method, by minimizing the functional

\[J(\{ϵ_{nl}\}) = J_T(\{|ϕ_k(T)⟩\}) + λ_a J_a(\{ϵ_{nl}\})\]

where the final time functional $J_T$ depends explicitly on the forward-propagated states and the running cost $J_a$ depends explicitly on the pulse values $ϵ_{nl}$ of the l'th control discretized on the n'th interval of the time grid.

Returns a `GrapeResult`.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of `problem`; any of these can be overridden with explicit keyword arguments to `optimize`.
Required problem keyword arguments

- `J_T`: A function `J_T(ϕ, trajectories; τ=τ)` that evaluates the final time functional from a vector `ϕ` of forward-propagated states and `problem.trajectories`. For all `trajectories` that define a `target_state`, the element `τₖ` of the vector `τ` will contain the overlap of the state `ϕₖ` with the `target_state` of the `k`'th trajectory, or `NaN` otherwise.
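As a hedged sketch of a functional with this signature — a "square-modulus"-style fidelity, assuming every trajectory defines a `target_state` (the packaged functionals, e.g. in `QuantumControl.Functionals`, should normally be used instead):

```julia
using LinearAlgebra

# Illustrative final-time functional:
#   J_T = 1 - |(1/N) Σₖ τₖ|²   with   τₖ = ⟨target_k|ϕ_k(T)⟩.
# Assumes all trajectories define a target_state, so every τₖ is a
# valid overlap (never NaN).
function my_J_T(ϕ, trajectories; τ=nothing)
    if τ === nothing  # fall back to computing the overlaps directly
        τ = [dot(traj.target_state, ϕₖ) for (traj, ϕₖ) in zip(trajectories, ϕ)]
    end
    return 1 - abs2(sum(τ) / length(τ))
end
```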
Optional problem keyword arguments

- `chi`: A function `chi!(χ, ϕ, trajectories)` that receives a list `ϕ` of the forward-propagated states and must set $|χₖ⟩ = -∂J_T/∂⟨ϕₖ|$. If not given, it will be automatically determined from `J_T` via `make_chi` with the default parameters.
- `J_a`: A function `J_a(pulsevals, tlist)` that evaluates running costs over the pulse values, where `pulsevals` are the vectorized values $ϵ_{nl}$, where `n` are the indices of the time intervals and `l` are the indices over the controls, i.e., `[ϵ₁₁, ϵ₂₁, …, ϵ₁₂, ϵ₂₂, …]` (the pulse values for each control are contiguous). If not given, the optimization will not include a running cost.
- `gradient_method=:gradgen`: One of `:gradgen` (default) or `:taylor`. With `gradient_method=:gradgen`, the gradient is calculated using QuantumGradientGenerators. With `gradient_method=:taylor`, it is evaluated via a Taylor series, see Eq. (20) in Kuprov and Rogers, J. Chem. Phys. 131, 234108 (2009) [17].
- `taylor_grad_max_order=100`: If given with `gradient_method=:taylor`, the maximum number of terms in the Taylor series. If `taylor_grad_check_convergence=true` (default), an error is thrown if the Taylor series does not converge within the given number of terms. With `taylor_grad_check_convergence=false`, this is the exact order of the Taylor series.
- `taylor_grad_tolerance=1e-16`: If given with `gradient_method=:taylor` and `taylor_grad_check_convergence=true`, stop the Taylor series when the norm of the term falls below the given tolerance. Ignored if `taylor_grad_check_convergence=false`.
- `taylor_grad_check_convergence=true`: If given as `true` (default), check the convergence after each term in the Taylor series and stop as soon as the norm of the term drops below the given tolerance. If `false`, stop after exactly `taylor_grad_max_order` terms.
- `lambda_a=1`: A weight for the running cost `J_a`.
- `grad_J_a`: A function to calculate the gradient of `J_a`. If not given, it will be automatically determined.
- `upper_bound`: An upper bound for the value of any optimized control. Time-dependent upper bounds can be specified via `pulse_options`.
- `lower_bound`: A lower bound for the value of any optimized control. Time-dependent lower bounds can be specified via `pulse_options`.
- `pulse_options`: A dictionary that maps every control (as obtained by `get_controls` from the `problem.trajectories`) to a dict with the following possible keys:
  - `:upper_bounds`: A vector of upper bound values, one for each interval of the time grid. Values of `Inf` indicate an unconstrained upper bound for that time interval, respectively the global `upper_bound`, if given.
  - `:lower_bounds`: A vector of lower bound values. Values of `-Inf` indicate an unconstrained lower bound for that time interval, respectively the global `lower_bound`, if given.
- `update_hook`: Not implemented.
- `info_hook`: A function (or tuple of functions) that receives the same arguments as `update_hook`, in order to write information about the current iteration to the screen or to a file. The default `info_hook` prints a table with convergence information to the screen. Runs after `update_hook`. The `info_hook` function may return a tuple, which is stored in the list of `records` inside the `GrapeResult` object.
- `check_convergence`: A function to check whether convergence has been reached. Receives a `GrapeResult` object `result`, and should set `result.converged` to `true` and `result.message` to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with `∘`. The convergence check is performed after any calls to `update_hook` and `info_hook`.
- `x_tol`: Parameter for Optim.jl.
- `f_tol`: Parameter for Optim.jl.
- `g_tol`: Parameter for Optim.jl.
- `show_trace`: Parameter for Optim.jl.
- `extended_trace`: Parameter for Optim.jl.
- `show_every`: Parameter for Optim.jl.
- `allow_f_increases`: Parameter for Optim.jl.
- `optimizer`: An optional Optim.jl optimizer (`Optim.AbstractOptimizer` instance). If not given, an L-BFGS-B optimizer will be used.
- `prop_method`: The propagation method to use for each trajectory; see below.
- `verbose=false`: If `true`, print information during initialization.
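A hedged sketch of custom convergence checks chained with `∘` (the field names follow the description above; the thresholds and both function names are purely illustrative):

```julia
# Each chained check must return the result object so that the next
# check in the composition can receive it.
function check_J_T(result)
    if result.J_T < 1e-3
        result.converged = true
        result.message = "J_T < 10⁻³"
    end
    return result
end

function check_iters(result)
    if result.iter >= 200
        result.converged = true
        result.message = "reached 200 iterations"
    end
    return result
end

# `∘` composes right-to-left: check_iters runs first, then check_J_T.
result = optimize(problem; method=GRAPE,
                  check_convergence=(check_J_T ∘ check_iters))
```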
Trajectory propagation

GRAPE may involve three types of propagation:

- A forward propagation for every `Trajectory` in the `problem`
- A backward propagation for every trajectory
- A backward propagation of a gradient generator for every trajectory

The keyword arguments for each propagation (see `propagate`) are determined from any properties of each `Trajectory` that have a `prop_` prefix, cf. `init_prop_trajectory`.
In situations where different parameters are required for the forward and backward propagation, the `fw_prop_` and `bw_prop_` prefixes can be used instead of the `prop_` prefix, respectively. These override any setting with the `prop_` prefix. Similarly, properties for the backward propagation of the gradient generators can be set with properties that have a `grad_prop_` prefix. These prefixes apply both to the properties of each `Trajectory` and the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global `prop_method` problem keyword argument.
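For example, one might want a different method only for the gradient-generator propagation. A sketch, assuming `ψ₀`, `ψ₁`, and `H` are defined elsewhere and treating the particular propagation methods as illustrative choices:

```julia
traj = Trajectory(ψ₀, H;
                  target_state=ψ₁,
                  grad_prop_method=ExpProp)  # only the gradient propagation

# The global prop_method covers the plain forward and backward
# propagations; grad_prop_ on the trajectory overrides it for the
# gradient-generator propagation.
result = optimize(problem; method=GRAPE, prop_method=Cheby)
```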