API Reference
This page provides all docstrings locally defined in the QuantumControl
package for both private and public symbols. See also the summary of the public API.
QuantumControl exposes local exported and unexported symbols, as well as re-exporting symbols and sub-modules from the QuantumPropagators subpackage and some of its submodules.
The QuantumControl
submodules provide additional public functionality. Note that some of the most commonly used symbols from QuantumControl
's submodules may also be re-exported at the top-level (such as @optimize_or_load
from the QuantumControl.Workflows
submodule).
Local Exported Symbols
QuantumControl.ControlProblem
— Type
A full control problem with multiple trajectories.
ControlProblem(
trajectories,
tlist;
kwargs...
)
The trajectories
are a list of Trajectory
instances, each defining an initial state and a dynamical generator for the evolution of that state. Usually, the trajectory will also include a target state (see Trajectory
) and possibly a weight. The trajectories
may also be given together with tlist
as a mandatory keyword argument.
The tlist
is the time grid on which the time evolution of the initial states of each trajectory should be propagated. It may also be given as a (mandatory) keyword argument.
The remaining kwargs
are keyword arguments that are passed directly to the optimal control method. These typically include e.g. the optimization functional.
The control problem is solved by finding a set of controls that minimize an optimization functional over all trajectories.
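For illustration, a minimal control problem might be assembled as in the following sketch; the states ψ0, ψ1 and the time-dependent generator H are assumptions here, and J_T_sm is the square-modulus functional from QuantumControl.Functionals.
using QuantumControl
using QuantumControl.Functionals: J_T_sm

# ψ0, ψ1 (states) and H (a time-dependent generator) are assumed to be
# defined elsewhere
tlist = collect(range(0, 5; length=501))
trajectories = [Trajectory(ψ0, H; target_state=ψ1)]
problem = ControlProblem(trajectories, tlist; J_T=J_T_sm, iter_stop=100)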
QuantumControl.Trajectory
— Type
Description of a state's time evolution.
Trajectory(
initial_state,
generator;
target_state=nothing,
weight=1.0,
kwargs...
)
describes the time evolution of the initial_state
under a time-dependent dynamical generator
(e.g., a Hamiltonian or Liouvillian).
Trajectories are central to quantum control problems: an optimization functional depends on the result of propagating one or more trajectories. For example, when optimizing for a quantum gate, the optimization considers the trajectories of all logical basis states.
In addition to the initial_state
and generator
, a Trajectory
may include data relevant to the propagation and to evaluating a particular optimization functional. Most functionals have the notion of a "target state" that the initial_state
should evolve towards, which can be given as the target_state
keyword argument. In some functionals, different trajectories enter with different weights [18], which can be given as a weight
keyword argument. Any other keyword arguments are also available to a functional as properties of the Trajectory
.
A Trajectory
can also be instantiated using all keyword arguments.
Properties
All keyword arguments used in the instantiation are available as properties of the Trajectory
. At a minimum, this includes initial_state
, generator
, target_state
, and weight
.
By convention, properties with a prop_
prefix, e.g., prop_method
, will be taken into account when propagating the trajectory. See propagate_trajectory
for details.
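For example, a trajectory carrying a propagation-related property might be set up as in this sketch (ψ0, ψtgt, and H are assumed to be defined; ExpProp is assumed to be available as a propagation method from QuantumPropagators):
using QuantumControl: Trajectory
using QuantumPropagators: ExpProp

traj = Trajectory(ψ0, H; target_state=ψtgt, weight=0.5, prop_method=ExpProp)
# traj.prop_method is picked up automatically by propagate_trajectory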
QuantumControl.optimize
— Function
Optimize a quantum control problem.
result = optimize(
problem;
method, # mandatory keyword argument
check=true,
callback=nothing,
print_iters=true,
kwargs...
)
optimizes towards a solution of given problem
with the given method
, which should be a Module
implementing the method, e.g.,
using Krotov
result = optimize(problem; method=Krotov)
If check
is true (default), the initial_state
and generator
of each trajectory are checked with check_state
and check_generator
. Any other keyword argument temporarily overrides the corresponding keyword argument in problem
. These arguments are available to the optimizer, see each optimization package's documentation for details.
The callback
can be given as a function to be called after each iteration in order to analyze the progress of the optimization or to modify the state of the optimizer or the current controls. The signature of callback
is method-specific, but callbacks should receive a workspace object as the first argument, the iteration number as the second argument, and then additional method-specific parameters.
The callback
function may return a tuple of values, and an optimization method should store these values for each iteration in a records
field in their Result
object. The callback
should be called once with an iteration number of 0
before the first iteration. The callback
can also be given as a tuple or vector of functions, which are automatically combined via chain_callbacks
.
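As an illustration, a method-agnostic callback might look like the following sketch; the extra method-specific positional arguments are simply ignored via args..., and the returned tuple is stored per iteration in result.records.
# Hypothetical callback: record the iteration number and the wallclock time
function my_callback(wrk, iteration, args...)
    return (iteration, time())
end

result = optimize(problem; method=Krotov, callback=my_callback)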
If print_iters
is true
(default), an automatic callback
is created via the method-specific make_print_iters
to print the progress of the optimization after each iteration. This automatic callback runs after any manually given callback
.
All remaining keyword arguments are method-specific. To obtain the documentation for which options a particular method uses, run, e.g.,
? optimize(problem, ::Val{:Krotov})
where :Krotov
is the name of the module implementing the method. The above is also the method signature that a Module
wishing to implement a control method must define.
The returned result
object is specific to the optimization method, but should be a subtype of QuantumControl.AbstractOptimizationResult
.
using Krotov
result = optimize(problem; method=Krotov, kwargs...)
optimizes the given control problem
using Krotov's method, by minimizing the functional
\[J(\{ϵ_l(t)\}) = J_T(\{|Ψ_k(T)⟩\}) + ∑_l \int_{0}^{T} \frac{λ_{a,l}}{S_l(t)} [ϵ_l(t) - ϵ_l^{(0)}(t)]^2 \, dt\,,\]
cf. the general form of a quantum control functional. The "reference field" $ϵ_l^{(0)}(t)$ is the guess control for that particular iteration. The above functional implies a first-order update equation
\[Δϵ_l(t) = \frac{S_l(t)}{λ_{a,l}} \Im ∑_k \left[ \Big\langle \chi_k^{(0)}(t) \Big\vert \frac{\partial \hat{H}_k}{\partial ϵ_l(t)} \Big\vert \Psi_k(t) \Big\rangle \right]\,,\]
where $|\chi^{(0)}_k(t)⟩$ is the state backward-propagated under $Ĥ_k^{\dagger}(\{ϵ_l^{(0)}(t)\})$ with the boundary condition $|\chi_k(T)⟩ = \partial J_T / \partial ⟨Ψ_k^{(0)}(T)|$ and $Ĥ_k$ is the generator
of the $k$'th trajectory.
Note that the particular control-dependent running cost in the above functional is required to obtain the given Krotov update equation. Other running costs, or state-dependent running costs are not supported in this implementation of Krotov's method (even though some running costs are mathematically compatible with Krotov's method).
Returns a KrotovResult
.
Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of problem
; any of these can be overridden with explicit keyword arguments to optimize
.
Required problem keyword arguments
J_T
: A functionJ_T(Ψ, trajectories)
that evaluates the final time functional from a listΨ
of forward-propagated states andproblem.trajectories
. The functionJ_T
may also take a keyword argumenttau
. If it does, a vector containing the complex overlaps of the target states (target_state
property of each trajectory inproblem.trajectories
) with the propagated states will be passed toJ_T
.
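For illustration, a custom functional that supports the tau keyword argument might be sketched as follows; J_T_custom is hypothetical (it evaluates 1 − |f_τ| with equal weights) and taus is the unexported helper documented further below.
using QuantumControl.Functionals: taus

function J_T_custom(Ψ, trajectories; tau=taus(Ψ, trajectories))
    N = length(trajectories)
    return 1 - abs(sum(tau) / N)  # hypothetical: 1 - |f_τ|, equal weights
end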
Recommended problem keyword arguments
lambda_a=1.0
: The inverse Krotov step width λₐ for every pulse.update_shape=(t->1.0)
: A functionS(t)
for the "update shape" that scales the update for every pulse.
If different controls require different lambda_a
or update_shape
, a dict pulse_options
must be given instead of a global lambda_a
and update_shape
; see below.
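For example, such a pulse_options dict might look like the following sketch, where ϵ1 and ϵ2 are assumed guess controls and flattop is assumed to be available from QuantumControl.Shapes:
using QuantumControl.Shapes: flattop

pulse_options = IdDict(
    ϵ1 => Dict(
        :lambda_a => 1.0,
        :update_shape => t -> flattop(t; T=5.0, t_rise=0.3),
    ),
    ϵ2 => Dict(:lambda_a => 10.0, :update_shape => t -> 1.0),
)
result = optimize(problem; method=Krotov, pulse_options=pulse_options)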
Optional problem keyword arguments
The following keyword arguments are supported (with default values):
pulse_options
: A dictionary that maps every control (as obtained byget_controls
from theproblem.trajectories
) to the following dict::lambda_a
: The value for inverse Krotov step width λₐ.:update_shape
: A functionS(t)
for the "update shape" that scales the Krotov pulse update.
This overrides the global
lambda_a
andupdate_shape
arguments.chi
: A functionchi(Ψ, trajectories)
that receives a listΨ
of the forward propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined fromJ_T
viamake_chi
with the default parameters. Similarly toJ_T
, ifchi
accepts a keyword argumenttau
, it will be passed a vector of complex overlaps.sigma=nothing
: A function that calculates the second-order contribution. If not given, the first-order Krotov method is used.iter_start=0
: The initial iteration number.iter_stop=5000
: The maximum iteration number.prop_method
: The propagation method to use for each trajectory; see below.print_iters=true
: Whether to print information after each iteration.store_iter_info=Set()
: Which fields fromprint_iters
to store inresult.records
. A subset ofSet(["iter.", "J_T", "∫gₐ(t)dt", "J", "ΔJ_T", "ΔJ", "secs"])
.callback
: A function (or tuple of functions) that receives the Krotov workspace, the iteration number, the list of updated pulses, and the list of guess pulses as positional arguments. The function may return a tuple of values which are stored in theKrotovResult
objectresult.records
. The function can also mutate any of its arguments, in particular the updated pulses. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations. Note thatprint_iters=true
(default) adds an automatic callback to print information after each iteration. Withstore_iter_info
, that callback automatically stores a subset of the printed information.check_convergence
: A function to check whether convergence has been reached. Receives aKrotovResult
objectresult
, and should setresult.converged
totrue
andresult.message
to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with∘
. The convergence check is performed after anycallback
.verbose=false
: Iftrue
, print information during initialization.rethrow_exceptions
: By default, any exception ends the optimization but still returns aKrotovResult
that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. Ifrethrow_exceptions=true
, instead of capturing the exception, it will be thrown normally.
Trajectory propagation
Krotov's method involves the forward and backward propagation for every Trajectory
in the problem
. The keyword arguments for each propagation (see propagate
) are determined from any properties of each Trajectory
that have a prop_
prefix, cf. init_prop_trajectory
.
In situations where different parameters are required for the forward and backward propagation, instead of the prop_
prefix, the fw_prop_
and bw_prop_
prefixes can be used, respectively. These override any setting with the prop_
prefix. This applies both to the properties of each Trajectory
and the problem keyword arguments.
Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method
problem keyword argument.
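A sketch of how propagation settings might be passed; Cheby and ExpProp are assumed to be available as propagation methods from QuantumPropagators.
using QuantumPropagators: Cheby, ExpProp

result = optimize(
    problem;
    method=Krotov,
    prop_method=Cheby,       # default for all propagations
    bw_prop_method=ExpProp,  # override for the backward propagation only
)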
using GRAPE
result = optimize(problem; method=GRAPE, kwargs...)
optimizes the given control problem
via the GRAPE method, by minimizing the functional
\[J(\{ϵ_{nl}\}) = J_T(\{|ϕ_k(T)⟩\}) + λ_a J_a(\{ϵ_{nl}\})\]
where the final time functional $J_T$ depends explicitly on the forward-propagated states and the running cost $J_a$ depends explicitly on pulse values $ϵ_{nl}$ of the l'th control discretized on the n'th interval of the time grid.
Returns a GrapeResult
.
Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of problem
; any of these can be overridden with explicit keyword arguments to optimize
.
Required problem keyword arguments
J_T
: A functionJ_T(Ψ, trajectories)
that evaluates the final time functional from a listΨ
of forward-propagated states andproblem.trajectories
. The functionJ_T
may also take a keyword argumenttau
. If it does, a vector containing the complex overlaps of the target states (target_state
property of each trajectory inproblem.trajectories
) with the propagated states will be passed toJ_T
.
Optional problem keyword arguments
chi
: A functionchi(Ψ, trajectories)
that receives a listΨ
of the forward propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined fromJ_T
viamake_chi
with the default parameters. Similarly toJ_T
, ifchi
accepts a keyword argumenttau
, it will be passed a vector of complex overlaps.J_a
: A functionJ_a(pulsevals, tlist)
that evaluates running costs over the pulse values, wherepulsevals
are the vectorized values $ϵ_{nl}$, wheren
are the indices of the time intervals and l
are the indices over the controls, i.e.,[ϵ₁₁, ϵ₂₁, …, ϵ₁₂, ϵ₂₂, …]
(the pulse values for each control are contiguous). If not given, the optimization will not include a running cost.gradient_method=:gradgen
: One of:gradgen
(default) or:taylor
. Withgradient_method=:gradgen
, the gradient is calculated using QuantumGradientGenerators. Withgradient_method=:taylor
, it is evaluated via a Taylor series, see Eq. (20) in Kuprov and Rogers, J. Chem. Phys. 131, 234108 (2009) [17].taylor_grad_max_order=100
: If given withgradient_method=:taylor
, the maximum number of terms in the Taylor series. Iftaylor_grad_check_convergence=true
(default), an error is thrown if the Taylor series does not converge within the given number of terms. With taylor_grad_check_convergence=false
, this is the exact order of the Taylor series.taylor_grad_tolerance=1e-16
: If given withgradient_method=:taylor
andtaylor_grad_check_convergence=true
, stop the Taylor series when the norm of the term falls below the given tolerance. Ignored iftaylor_grad_check_convergence=false
.taylor_grad_check_convergence=true
: If given astrue
(default), check the convergence after each term in the Taylor series and stop as soon as the norm of the term drops below the given tolerance. If false
, stop after exactlytaylor_grad_max_order
terms.lambda_a=1
: A weight for the running costJ_a
.grad_J_a
: A function to calculate the gradient ofJ_a
. If not given, it will be automatically determined. Seemake_grad_J_a
for the required interface.upper_bound
: An upper bound for the value of any optimized control. Time-dependent upper bounds can be specified viapulse_options
.lower_bound
: A lower bound for the value of any optimized control. Time-dependent lower bounds can be specified viapulse_options
.pulse_options
: A dictionary that maps every control (as obtained byget_controls
from theproblem.trajectories
) to a dict with the following possible keys::upper_bounds
: A vector of upper bound values, one for each intervals of the time grid. Values ofInf
indicate an unconstrained upper bound for that time interval, respectively the globalupper_bound
, if given.:lower_bounds
: A vector of lower bound values. Values of-Inf
indicate an unconstrained lower bound for that time interval, respectively the global lower_bound, if given.
print_iters=true
: Whether to print information after each iteration.print_iter_info=["iter.", "J_T", "|∇J|", "|Δϵ|", "ΔJ", "FG(F)", "secs"]
: Which fields to print ifprint_iters=true
. If given, must be a list of header labels (strings), which can be any of the following:"iter."
: The iteration number"J_T"
: The value of the final-time functional for the dynamics under the optimized pulses"J_a"
: The value of the pulse-dependent running cost for the optimized pulses"λ_a⋅J_a"
: The total contribution ofJ_a
to the full functionalJ
"J"
: The value of the optimization functional for the optimized pulses"ǁ∇J_Tǁ"
: The ℓ²-norm of the current gradient of the final-time functional. Note that this is usually the gradient of the optimized pulse, not the guess pulse."ǁ∇J_aǁ"
: The ℓ²-norm of the current gradient of the pulse-dependent running cost. For comparison with"ǁ∇J_Tǁ"
."λ_aǁ∇J_aǁ"
: The ℓ²-norm of the current gradient of the complete pulse-dependent running cost term. For comparison with"ǁ∇J_Tǁ"
."ǁ∇Jǁ"
: The norm of the guess pulse gradient. Note that the guess pulse gradient is not the same as the current gradient."ǁΔϵǁ"
: The ℓ²-norm of the pulse update"ǁϵǁ"
: The ℓ²-norm of optimized pulse values"max|Δϵ|"
: The maximum value of the pulse update (infinity norm)"max|ϵ|"
: The maximum value of the pulse values (infinity norm)"ǁΔϵǁ/ǁϵǁ"
: The ratio of the pulse update to the optimized pulse values"∫Δϵ²dt"
: The L²-norm of the pulse update, summed over all pulses. A convergence measure comparable (proportional) to the running cost in Krotov's method"ǁsǁ"
: The norm of the search direction. Should beǁΔϵǁ
scaled by the step width α
."∠°"
: The angle (in degrees) between the negative gradient-∇J
and the search directions
."α"
: The step width as determined by the line search (Δϵ = α⋅s
)"ΔJ_T"
: The change in the final time functional relative to the previous iteration"ΔJ_a"
: The change in the control-dependent running cost relative to the previous iteration"λ_a⋅ΔJ_a"
: The change in the control-dependent running cost term relative to the previous iteration."ΔJ"
: The change in the total optimization functional relative to the previous iteration."FG(F)"
: The number of functional/gradient evaluations (FG), or pure functional (F) evaluations"secs"
: The number of seconds of wallclock time spent on the iteration.store_iter_info=[]
: Which fields to store inresult.records
, given as
a list of header labels, see
print_iter_info
.callback
: A function (or tuple of functions) that receives the GRAPE workspace and the iteration number. The function may return a tuple of values which are stored in theGrapeResult
objectresult.records
. The function can also mutate the workspace, in particular the updatedpulsevals
. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations. Note thatprint_iters=true
(default) adds an automatic callback to print information after each iteration. Withstore_iter_info
, that callback automatically stores a subset of the available information.check_convergence
: A function to check whether convergence has been reached. Receives aGrapeResult
objectresult
, and should setresult.converged
totrue
andresult.message
to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with∘
. The convergence check is performed after anycallback
.x_tol
: Parameter for Optim.jlf_tol
: Parameter for Optim.jlg_tol
: Parameter for Optim.jlshow_trace
: Parameter for Optim.jlextended_trace
: Parameter for Optim.jlshow_every
: Parameter for Optim.jlallow_f_increases
: Parameter for Optim.jloptimizer
: An optional Optim.jl optimizer (Optim.AbstractOptimizer
instance). If not given, an L-BFGS-B optimizer will be used.prop_method
: The propagation method to use for each trajectory, see below.verbose=false
: Iftrue
, print information during initializationrethrow_exceptions
: By default, any exception ends the optimization, but still returns aGrapeResult
that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. Ifrethrow_exceptions=true
, instead of capturing the exception, it will be thrown normally.
Trajectory propagation
GRAPE may involve three types of propagation:
- A forward propagation for every
Trajectory
in theproblem
- A backward propagation for every trajectory
- A backward propagation of a gradient generator for every trajectory.
The keyword arguments for each propagation (see propagate
) are determined from any properties of each Trajectory
that have a prop_
prefix, cf. init_prop_trajectory
.
In situations where different parameters are required for the forward and backward propagation, instead of the prop_
prefix, the fw_prop_
and bw_prop_
prefixes can be used, respectively. These override any setting with the prop_
prefix. Similarly, properties for the backward propagation of the gradient generators can be set with properties that have a grad_prop_
prefix. These prefixes apply both to the properties of each Trajectory
and the problem keyword arguments.
Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method
problem keyword argument.
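Putting several of the above options together, a GRAPE optimization might be set up as in the following sketch; problem is assumed to be a ControlProblem that defines J_T, and Cheby is assumed to be available from QuantumPropagators.
using GRAPE
using QuantumControl.Functionals: J_a_fluence
using QuantumPropagators: Cheby

result = optimize(
    problem;
    method=GRAPE,
    prop_method=Cheby,
    J_a=J_a_fluence,
    lambda_a=0.01,
    upper_bound=1.0,
    lower_bound=-1.0,
    print_iter_info=["iter.", "J_T", "λ_a⋅J_a", "J", "ΔJ", "secs"],
)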
QuantumControl.propagate_trajectories
— Function
Propagate multiple trajectories in parallel.
result = propagate_trajectories(
trajectories, tlist; use_threads=true, kwargs...
)
runs propagate_trajectory
for every trajectory in trajectories
, collects and returns a vector of results. The propagation happens in parallel if use_threads=true
(default). All keyword parameters are passed to propagate_trajectory
, except that if initial_state
is given, it must be a vector of initial states, one for each trajectory. Likewise, to pass pre-allocated storage arrays to storage
, a vector of storage arrays must be passed. A simple storage=true
will still work to return a vector of storage results.
QuantumControl.propagate_trajectory
— Function
Propagate a Trajectory.
propagate_trajectory(
traj,
tlist;
initial_state=traj.initial_state,
kwargs...
)
propagates initial_state
under the dynamics described by traj.generator
. It takes the same keyword arguments as QuantumPropagators.propagate
, with default values from any property of traj
with a prop_
prefix (prop_method
, prop_inplace
, prop_callback
, …). See init_prop_trajectory
for details.
Note that method
(a mandatory keyword argument in QuantumPropagators.propagate
) must be specified, either as a property prop_method
of the trajectory, or by passing a method
keyword argument explicitly.
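For example (traj and tlist are assumed to be defined; Cheby is assumed to be available as a propagation method):
using QuantumPropagators: Cheby

Ψ_T = propagate_trajectory(traj, tlist; method=Cheby)
# with storage=true, the states at every point of the time grid are returned
states = propagate_trajectory(traj, tlist; method=Cheby, storage=true)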
Local Unexported Symbols
QuantumControl.set_default_ad_framework
— Function
Set the default provider for automatic differentiation.
QuantumControl.set_default_ad_framework(mod; quiet=false)
registers the given module (package) as the default AD framework.
This determines the default setting for the automatic
parameter in the following functions:
QuantumControl.Functionals.make_chi
QuantumControl.Functionals.make_gate_chi
QuantumControl.Functionals.make_grad_J_a
The given mod
must be a supported AD framework, e.g.,
import Zygote
QuantumControl.set_default_ad_framework(Zygote)
Currently, there is built-in support for Zygote
and FiniteDifferences
.
For other packages to be used as the default AD framework, the appropriate methods for make_chi
etc. must be defined.
Unless quiet=true
, calling set_default_ad_framework
will show a message to confirm the setting.
To unset the default AD framework, use
QuantumControl.set_default_ad_framework(nothing)
QuantumControl.AbstractOptimizationResult
— Type
Abstract type for the result object returned by optimize
. Any optimization method implemented on top of QuantumControl
should subtype from AbstractOptimizationResult
. This enables conversion between the results of different methods, allowing one method to continue an optimization from another method.
In order for this to work seamlessly, result objects should use a common set of field names as much as possible. When a result object requires fields that cannot be provided by all other result objects, it should have default values for these fields, which can be defined in a custom Base.convert
method, as, e.g.,
function Base.convert(::Type{MyResult}, result::AbstractOptimizationResult)
defaults = Dict{Symbol,Any}(
:f_calls => 0,
:fg_calls => 0,
)
return convert(MyResult, result, defaults)
end
Where f_calls
and fg_calls
are fields of MyResult
that are not present in a given result
of a different type. The three-argument convert
is defined internally for any AbstractOptimizationResult
.
$\gdef\tgt{\text{tgt}}$ $\gdef\tr{\operatorname{tr}}$ $\gdef\Re{\operatorname{Re}}$ $\gdef\Im{\operatorname{Im}}$
List of Submodules
QuantumControl
has the following sub-modules:
QuantumControl.Amplitudes
QuantumControl.Controls
QuantumControl.Functionals
QuantumControl.Generators
QuantumControl.Interfaces
QuantumControl.PulseParameterizations
QuantumControl.Shapes
QuantumControl.Storage
QuantumControl.Workflows
QuantumControl.Amplitudes
Re-exported Symbols:
QuantumControl.Controls
Public Symbols:
Re-exported Symbols:
ParameterizedFunction
discretize
discretize_on_midpoints
evaluate
evaluate!
get_controls
get_parameters
get_tlist_midpoints
substitute
t_mid
Public Symbols
QuantumControl.Controls.get_control_deriv
— Function
Get the derivative of the generator $G$ w.r.t. the control $ϵ(t)$.
μ = get_control_deriv(generator, control)
returns nothing
if the generator
(Hamiltonian or Liouvillian) does not depend on control
, or a generator
\[μ = \frac{∂G}{∂ϵ(t)}\]
otherwise. For linear control terms, μ
will be a static operator, e.g. an AbstractMatrix
or an Operator
. For non-linear controls, μ
will be time-dependent, e.g. a Generator
. In either case, evaluate
should be used to evaluate μ
into a constant operator for particular values of the controls and a particular point in time.
For constant generators, e.g. an Operator
, the result is always nothing
.
a = get_control_deriv(ampl, control)
returns the derivative $∂a_l(t)/∂ϵ_{l'}(t)$ of the given amplitude $a_l(\{ϵ_{l''}(t)\}, t)$ with respect to the given control $ϵ_{l'}(t)$. For "trivial" amplitudes, where $a_l(t) ≡ ϵ_l(t)$, the result will be either 1.0
or 0.0
(depending on whether ampl ≡ control
). For non-trivial amplitudes, the result may be another amplitude that depends on the controls and potentially on time, but can be evaluated to a constant with evaluate
.
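A sketch for a Hamiltonian with a single linear control term; H0 and H1 are assumed static operators and ϵ an assumed control function.
using QuantumPropagators: hamiltonian
using QuantumControl.Controls: get_control_deriv

H = hamiltonian(H0, (H1, ϵ))
μ = get_control_deriv(H, ϵ)  # for this linear control term, μ is just H1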
QuantumControl.Controls.get_control_derivs
— Function
Get a vector of the derivatives of generator
w.r.t. each control.
get_control_derivs(generator, controls)
returns a vector containing the derivative of generator
with respect to each control in controls
. The elements of the vector are either nothing
if generator
does not depend on that particular control, or a function μ(α)
that evaluates the derivative for a particular value of the control, see get_control_deriv
.
QuantumControl.Functionals
Public Symbols:
Private Symbols:
Public Symbols
QuantumControl.Functionals.J_T_re
— Function
Real-part functional.
J_T_re(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[J_{T,\text{re}} = 1 - F_{\text{re}} \quad\in \begin{cases} [0, 2] & \text{in Hilbert space} \\ [0, 1] & \text{in Liouville space.} \end{cases}\]
All arguments are passed to f_tau
while evaluating $F_{\text{re}}$ in F_re
.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.J_T_sm
— Function
Square-modulus functional.
J_T_sm(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[J_{T,\text{sm}} = 1 - F_{\text{sm}} \quad\in [0, 1].\]
All arguments are passed to f_tau
while evaluating $F_{\text{sm}}$ in F_sm
.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.J_T_ss
— Function
State-to-state phase-insensitive functional.
J_T_ss(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[J_{T,\text{ss}} = 1 - F_{\text{ss}} \in [0, 1].\]
All arguments are passed to F_ss
.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.J_a_fluence
— Function
Running cost for the pulse fluence.
J_a = J_a_fluence(pulsevals, tlist)
calculates
\[J_a = \sum_l \int_0^T |ϵ_l(t)|^2 dt = \left(\sum_{nl} |ϵ_{nl}|^2 \right) dt\]
where $ϵ_{nl}$ are the values in the (vectorized) pulsevals
, n
is the index of the intervals of the time grid, and $dt$ is the time step, taken from the first time interval of tlist
and assumed to be uniform.
QuantumControl.Functionals.gate_functional
— Function
Convert a functional from acting on a gate to acting on propagated states.
J_T = gate_functional(J_T_U; kwargs...)
constructs a functional J_T
that meets the requirements for Krotov/GRAPE and make_chi
. That is, the output J_T
takes positional arguments Ψ
and trajectories
. The input functional J_T_U
is assumed to have the signature J_T_U(U; kwargs...)
where U
is a matrix with elements $U_{ij} = ⟨Ψ_i|Ψ_j⟩$, where $|Ψ_i⟩$ is the initial_state
of the i'th trajectories
(assumed to be the i'th canonical basis state) and $|Ψ_j⟩$ is the result of forward-propagating the j'th basis state. That is, U
is the projection of the time evolution operator into the subspace defined by the basis in the initial_states
of the trajectories
.
See also
make_gate_chi
— create a correspondingchi
function that acts more efficiently than the generalmake_chi
.
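As a sketch, a gate functional for a CNOT target might be defined as follows; the gate infidelity J_T_C below is a hypothetical example, not part of the package.
using LinearAlgebra
using QuantumControl.Functionals: gate_functional

CNOT = ComplexF64[1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0]
J_T_C(U; kwargs...) = 1 - abs2(tr(CNOT' * U) / 4)  # hypothetical gate infidelity

J_T = gate_functional(J_T_C)  # callable as J_T(Ψ, trajectories)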
QuantumControl.Functionals.make_chi
— Function
Return a function that calculates $|χ_k⟩ = -∂J_T/∂⟨Ψ_k|$.
chi = make_chi(
J_T,
trajectories;
mode=:any,
automatic=:default,
via=(any(isnothing(t.target_state) for t in trajectories) ? :states : :tau),
)
creates a function chi(Ψ, trajectories; τ)
that returns a vector of states χ
with $|χ_k⟩ = -∂J_T/∂⟨Ψ_k|$, where $|Ψ_k⟩$ is the k'th element of Ψ
. These are the states used as the boundary condition for the backward propagation in Krotov's method and GRAPE. Each $|χₖ⟩$ is defined as a matrix calculus Wirtinger derivative,
\[|χ_k(T)⟩ = -\frac{∂J_T}{∂⟨Ψ_k|} = -\frac{1}{2} ∇_{Ψ_k} J_T\,;\qquad ∇_{Ψ_k} J_T ≡ \frac{∂J_T}{\Re[Ψ_k]} + i \frac{∂J_T}{\Im[Ψ_k]}\,.\]
The function J_T
must take a vector of states Ψ
and a vector of trajectories
as positional parameters. If via=:tau, it must also accept a vector tau as a keyword argument that contains the overlaps of the states Ψ with the target states from the trajectories (see, e.g., J_T_sm).
The derivative can be calculated analytically or automatically (via automatic differentiation), depending on the value of mode
. For mode=:any
, an analytic derivative is returned if available, with a fallback to an automatic derivative.
If mode=:analytic
, return an analytically known $-∂J_T/∂⟨Ψ_k|$, e.g.,
QuantumControl.Functionals.J_T_sm
→QuantumControl.Functionals.chi_sm
,QuantumControl.Functionals.J_T_re
→QuantumControl.Functionals.chi_re
,QuantumControl.Functionals.J_T_ss
→QuantumControl.Functionals.chi_ss
.
and throw an error if no analytic derivative is known.
If mode=:automatic
, return an automatic derivative (even if an analytic derivative is known). The calculation of an automatic derivative (whether via mode=:any
or mode=:automatic
) requires that a suitable framework (e.g., Zygote
or FiniteDifferences
) has been loaded. The loaded module must be passed as automatic
keyword argument. Alternatively, it can be registered as a default value for automatic
by calling QuantumControl.set_default_ad_framework
.
When evaluating $|χ_k⟩$ automatically, if via=:states
is given, $|χ_k(T)⟩$ is calculated directly as defined above from the gradient with respect to the states $\{|Ψ_k(T)⟩\}$.
If via=:tau
is given instead, the functional $J_T$ is considered a function of overlaps $τ_k = ⟨Ψ_k^\tgt|Ψ_k(T)⟩$. This requires that all trajectories
define a target_state
and that J_T
calculates the value of the functional solely based on the values of tau
passed as a keyword argument. With only the complex conjugate $τ̄_k = ⟨Ψ_k(T)|Ψ_k^\tgt⟩$ having an explicit dependency on $⟨Ψ_k(T)|$, the chain rule in this case is
\[|χ_k(T)⟩ = -\frac{∂J_T}{∂⟨Ψ_k|} = -\left( \frac{∂J_T}{∂τ̄_k} \frac{∂τ̄_k}{∂⟨Ψ_k|} \right) = - \frac{1}{2} (∇_{τ_k} J_T) |Ψ_k^\tgt⟩\,.\]
Again, we have used the definition of the Wirtinger derivatives,
\[\begin{align*} \frac{∂J_T}{∂τ_k} &≡ \frac{1}{2}\left( \frac{∂ J_T}{∂ \Re[τ_k]} - i \frac{∂ J_T}{∂ \Im[τ_k]} \right)\,,\\ \frac{∂J_T}{∂τ̄_k} &≡ \frac{1}{2}\left( \frac{∂ J_T}{∂ \Re[τ_k]} + i \frac{∂ J_T}{∂ \Im[τ_k]} \right)\,, \end{align*}\]
and the definition of the Zygote gradient with respect to a complex scalar,
\[∇_{τ_k} J_T = \left( \frac{∂ J_T}{∂ \Re[τ_k]} + i \frac{∂ J_T}{∂ \Im[τ_k]} \right)\,.\]
In order to extend make_chi
with an analytic implementation for a new J_T
function, define a new method make_analytic_chi
like so:
QuantumControl.Functionals.make_analytic_chi(::typeof(J_T_sm), trajectories) = chi_sm
which links make_chi
for QuantumControl.Functionals.J_T_sm
to QuantumControl.Functionals.chi_sm
.
Zygote is notorious for being buggy (silently returning incorrect gradients). Always test automatic derivatives against finite differences and/or other automatic differentiation frameworks.
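For example, assuming the trajectories in problem.trajectories define target states:
using QuantumControl.Functionals: J_T_sm, make_chi

chi = make_chi(J_T_sm, problem.trajectories)  # analytic, equivalent to chi_sm
χ = chi(Ψ, problem.trajectories)              # |χₖ⟩ = -∂J_T/∂⟨Ψₖ|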
QuantumControl.Functionals.make_gate_chi
— Function
Return a function to evaluate $|χ_k⟩ = -∂J_T(Û)/∂⟨Ψ_k|$ via the chain rule.
chi = make_gate_chi(J_T_U, trajectories; automatic=:default, kwargs...)
returns a function equivalent to
chi = make_chi(
gate_functional(J_T_U; kwargs...),
trajectories;
mode=:automatic,
automatic,
)
\[\begin{split} |χ_k⟩ &= -\frac{∂}{∂⟨Ψ_k|} J_T \\ &= - \frac{1}{2} \sum_i (∇_U J_T)_{ik} \frac{∂ U_{ik}}{∂⟨Ψ_k|} \\ &= - \frac{1}{2} \sum_i (∇_U J_T)_{ik} |Ψ_i⟩ \end{split}\]
where $|Ψ_i⟩$ is the basis state stored as the initial_state
of the i'th trajectory, see gate_functional
.
The gradient $∇_U J_T$ is obtained via automatic differentiation (AD). This requires that an AD package has been loaded (e.g., using Zygote
). This package must either be passed as the automatic
keyword argument, or the package must be set as the default AD provider using QuantumControl.set_default_ad_framework
.
Compared to the more general make_chi
with mode=:automatic
, make_gate_chi
will generally have a slightly smaller numerical overhead, as it pushes the use of automatic differentiation down by one level.
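A sketch, continuing the hypothetical J_T_C from the gate_functional example above (Zygote is assumed to be installed):
import Zygote
using QuantumControl.Functionals: make_gate_chi

chi = make_gate_chi(J_T_C, trajectories; automatic=Zygote)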
QuantumControl.Functionals.make_grad_J_a
— Function
Return a function to evaluate $∂J_a/∂ϵ_{ln}$ for a pulse value running cost.
grad_J_a = make_grad_J_a(
J_a,
tlist;
mode=:any,
automatic=:default,
)
returns a function so that ∇J_a = grad_J_a(pulsevals, tlist) sets ∇J_a to a vector containing the vectorized elements $∂J_a/∂ϵ_{ln}$. The function J_a
must have the interface J_a(pulsevals, tlist)
, see, e.g., J_a_fluence
.
The parameters mode
and automatic
are handled as in make_chi
, where mode
is one of :any
, :analytic
, :automatic
, and automatic
is the loaded module of an automatic differentiation framework, where :default
refers to the framework set with QuantumControl.set_default_ad_framework
.
In order to extend make_grad_J_a
with an analytic implementation for a new J_a
function, define a new method make_analytic_grad_J_a
like so:
make_analytic_grad_J_a(::typeof(J_a_fluence), tlist) = grad_J_a_fluence
which links make_grad_J_a
for J_a_fluence
to grad_J_a_fluence
.
Private Symbols
QuantumControl.Functionals.chi_ss
— Function
Backward boundary states $|χ⟩$ for functional J_T_ss.
χ = chi_ss(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates the vector of states χ
according to
\[|χ_k⟩ = -\frac{∂ J_{T,\text{ss}}}{∂ ⟨Ψ_k(T)|} = \frac{1}{N} w_k τ_k |Ψ^{\tgt}_k⟩\,,\]
with $|Ψ^{\tgt}_k⟩$, $τ_k$ and $w_k$ as defined in f_tau
.
Note: this function can be obtained with make_chi(J_T_ss, trajectories)
.
QuantumControl.Functionals.F_ss
— Function
State-to-state phase-insensitive fidelity.
F_ss(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[F_{\text{ss}} = \frac{1}{N} \sum_{k=1}^{N} w_k |τ_k|^2 \quad\in [0, 1]\]
with $N$, $w_k$ and $τ_k$ as in f_tau
.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.chi_re
— Function
Backward boundary states $|χ⟩$ for functional J_T_re.
χ = chi_re(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates the vector of states χ
according to
\[|χ_k⟩ = -\frac{∂ J_{T,\text{re}}}{∂ ⟨Ψ_k(T)|} = \frac{1}{2N} w_k |Ψ^{\tgt}_k⟩\]
with $|Ψ^{\tgt}_k⟩$ and $w_k$ as defined in f_tau
.
Note: this function can be obtained with make_chi(J_T_re, trajectories)
.
QuantumControl.Functionals.chi_sm
— Function
Backward boundary states $|χ⟩$ for functional J_T_sm.
χ = chi_sm(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates the vector of states χ
according to
\[|χ_k⟩ = -\frac{\partial J_{T,\text{sm}}}{\partial ⟨Ψ_k(T)|} = \frac{1}{N^2} w_k \sum_{j}^{N} w_j τ_j |Ψ_k^{\tgt}⟩\]
with $|Ψ^{\tgt}_k⟩$, $τ_j$ and $w_k$ as defined in f_tau
.
Note: this function can be obtained with make_chi(J_T_sm, trajectories)
.
QuantumControl.Functionals.f_tau
— Function
Average complex overlap of the target states with forward-propagated states.
f_tau(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[f_τ = \frac{1}{N} \sum_{k=1}^{N} w_k τ_k\]
with
\[τ_k = ⟨Ψ_k^\tgt|Ψ_k(T)⟩\]
in Hilbert space, or
\[τ_k = \tr[ρ̂_k^{\tgt\,\dagger} ρ̂_k(T)]\]
in Liouville space, where $|Ψ_k⟩$ or $ρ̂_k$ are the elements of Ψ
, and $|Ψ_k^\tgt⟩$ or $ρ̂_k^\tgt$ are the target states from the target_state
field of the trajectories
. If tau
/τ
is given as a keyword argument, it must contain the values τ_k
according to the above definition. Otherwise, the $τ_k$ values will be calculated internally, see taus
.
$N$ is the number of trajectories, and $w_k$ is the weight
attribute for each trajectory. The weights are not automatically normalized, they are assumed to have values such that the resulting $f_τ$ lies in the unit circle of the complex plane. Usually, this means that the weights should sum to $N$.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.taus!
— Function
Overlaps of target states with propagated states, calculated in-place.
taus!(τ, Ψ, trajectories; ignore_missing_target_state=false)
overwrites the complex vector τ
with the results of taus(Ψ, trajectories)
.
Throws an ArgumentError
if any of the trajectories have a target_state
of nothing
. If ignore_missing_target_state=true
, values in τ
instead will remain unchanged for any trajectories with a missing target state.
QuantumControl.Functionals.taus
— Function
Overlaps of target states with propagated states.
τ = taus(Ψ, trajectories)
calculates a vector of values $τ_k = ⟨Ψ_k^{tgt}|Ψ_k⟩$ where $|Ψ_k^{tgt}⟩$ is the traj.target_state
of the $k$'th element of trajectories
and $|Ψₖ⟩$ is the $k$'th element of Ψ
.
The definition of the τ values with $Ψ_k^{tgt}$ on the left (overlap of target states with propagated states, as opposed to overlap of propagated states with target states) matches Refs. [12] and [19].
The function requires that each trajectory defines a target state. See also taus!
for an in-place version that includes well-defined error handling for any trajectories whose target_state
property is nothing
.
QuantumControl.Functionals.F_sm
— Function
Square-modulus fidelity.
F_sm(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[F_{\text{sm}} = |f_τ|^2 = \left\vert\frac{1}{N} \sum_{k=1}^{N} w_k τ_k\right\vert^2 = \frac{1}{N^2} \sum_{k=1}^{N} \sum_{j=1}^{N} w_k w_j τ̄_k τ_j \quad\in [0, 1]\,,\]
with $w_k$ the weight for the k'th trajectory and $τ_k$ the overlap of the k'th propagated state with the k'th target state, $τ̄_k$ the complex conjugate of $τ_k$, and $N$ the number of trajectories.
All arguments are passed to f_tau
to evaluate $f_τ$.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.F_re
— Function
Real-part fidelity.
F_re(Ψ, trajectories; tau=taus(Ψ, trajectories), τ=tau)
calculates
\[F_{\text{re}} = \Re[f_{τ}] = \Re\left[ \frac{1}{N} \sum_{k=1}^{N} w_k τ_k \right] \quad\in \begin{cases} [-1, 1] & \text{in Hilbert space} \\ [0, 1] & \text{in Liouville space.} \end{cases}\]
with $w_k$ the weight for the k'th trajectory and $τ_k$ the overlap of the k'th propagated state with the k'th target state, and $N$ the number of trajectories.
All arguments are passed to f_tau
to evaluate $f_τ$.
Reference
- [12] Palao and Kosloff, Phys. Rev. A 68, 062308 (2003)
QuantumControl.Functionals.grad_J_a_fluence
— Function
Analytic derivative for J_a_fluence.
∇J_a = grad_J_a_fluence(pulsevals, tlist)
returns ∇J_a
, which contains the (vectorized) elements $2 ϵ_{nl} dt$, where $ϵ_{nl}$ are the (vectorized) elements of pulsevals
and $dt$ is the time step, taken from the first time interval of tlist
and assumed to be uniform.
QuantumControl.Generators
Re-exported Symbols:
QuantumControl.Interfaces
Public Symbols:
Re-exported Symbols:
check_control
check_operator
check_parameterized
check_parameterized_function
check_state
supports_inplace
Public Symbols
QuantumControl.Interfaces.check_amplitude
— Function
Check an amplitude in a Generator
in the context of optimal control.
@test check_amplitude(
ampl; tlist, for_gradient_optimization=true, quiet=false
)
verifies that the given ampl
is a valid element in the list of amplitudes
of a Generator
object. This checks all the conditions of QuantumPropagators.Interfaces.check_amplitude
. In addition, the following conditions must be met.
If for_gradient_optimization
:
- The function
get_control_deriv(ampl, control)
must be defined - If
ampl
does not depend oncontrol
,get_control_deriv(ampl, control)
must return0.0
- If
ampl
depends oncontrol
,u = get_control_deriv(ampl, control)
must return an objectu
so thatevaluate(u, tlist, n)
returns a Number. In most cases,u
itself will be a Number. For more unusual amplitudes, e.g., an amplitude with a non-linear dependency on the controls,u
may be another amplitude. The controls inu
(as obtained byQuantumPropagators.Controls.get_controls
) must be a subset of the controls inampl
.
The function returns true
for a valid amplitude and false
for an invalid amplitude. Unless quiet=true
, it will log an error to indicate which of the conditions failed.
QuantumControl.Interfaces.check_generator
— Function
Check the dynamical generator
in the context of optimal control.
@test check_generator(
generator; state, tlist,
for_expval=true, for_pwc=true, for_time_continuous=false,
for_parameterization=false, for_gradient_optimization=true,
atol=1e-15, quiet=false
)
verifies the given generator
. This checks all the conditions of QuantumPropagators.Interfaces.check_generator
. In addition, the following conditions must be met.
If for_gradient_optimization
:
get_control_derivs(generator, controls)
must be defined and return a vector containing the result ofget_control_deriv(generator, control)
for everycontrol
incontrols
.get_control_deriv(generator, control)
must return an object that passes the less restrictiveQuantumPropagators.Interfaces.check_generator
ifcontrol
is inget_controls(generator)
. The controls in the derivative (if any) must be a subset of the controls ingenerator.
get_control_deriv(generator, control)
must returnnothing
ifcontrol
is not inget_controls(generator)
- If
generator
is aGenerator
instance, everyampl
ingenerator.amplitudes
must passcheck_amplitude(ampl; tlist)
.
The function returns true
for a valid generator and false
for an invalid generator. Unless quiet=true
, it will log an error to indicate which of the conditions failed.
QuantumControl.PulseParameterizations
Public Symbols:
LogisticParameterization
LogisticSqParameterization
ParameterizedAmplitude
SquareParameterization
TanhParameterization
TanhSqParameterization
Private Symbols:
Public Symbols
QuantumControl.PulseParameterizations.LogisticParameterization
— Function
Parameterization with a Logistic function that enforces a_min < a(t) < a_max.
QuantumControl.PulseParameterizations.LogisticSqParameterization
— Function
Parameterization with a Logistic-Square function that enforces 0 ≤ a(t) < a_max.
QuantumControl.PulseParameterizations.ParameterizedAmplitude
— Type
An amplitude determined by a pulse parameterization.
That is, $a(t) = a(ϵ(t))$ with a bijective mapping between the value of $a(t)$ and $ϵ(t)$, e.g. $a(t) = ϵ^2(t)$ (a SquareParameterization
). Optionally, the amplitude may be multiplied with an additional shape function, cf. ShapedAmplitude
.
ampl = ParameterizedAmplitude(control; parameterization)
initializes $a(t) = a(ϵ(t))$ where $ϵ(t)$ is the control
, and the mandatory keyword argument parameterization
is a PulseParameterization
. The control
must either be a vector of values discretized to the midpoints of a time grid, or a callable control(t)
.
ampl = ParameterizedAmplitude(control; parameterization, shape=shape)
initializes $a(t) = S(t) a(ϵ(t))$ where $S(t)$ is the given shape
. It must be a vector if control
is a vector, or a callable shape(t)
if control
is a callable.
ampl = ParameterizedAmplitude(control, tlist; parameterization, shape=shape)
discretizes control
and shape
(if given) to the midpoints of tlist
before initialization.
ampl = ParameterizedAmplitude(
amplitude, tlist; parameterization, shape=shape, parameterize=true
)
initializes $ã(t) = S(t) a(t)$ where $a(t)$ is the input amplitude
. First, if amplitude
is a callable amplitude(t)
, it is discretized to the midpoints of tlist
. Then, a control
$ϵ(t)$ is calculated so that $a(t) ≈ a(ϵ(t))$. Clipping may occur if the values in amplitude cannot be represented with the given parameterization
. Lastly, ParameterizedAmplitude(control; parameterization, shape)
is initialized with the calculated control
.
Note that the tlist
keyword argument is required when parameterize=true
is given, even if amplitude
is already a vector.
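For illustration, a non-negative amplitude with a flat-top shape might be set up as in the following sketch; ϵ_guess and tlist are assumed, and flattop is assumed to be available from QuantumControl.Shapes.
using QuantumControl.PulseParameterizations: ParameterizedAmplitude, SquareParameterization
using QuantumControl.Shapes: flattop

ampl = ParameterizedAmplitude(
    ϵ_guess, tlist;
    parameterization=SquareParameterization(),
    shape=t -> flattop(t; T=tlist[end], t_rise=0.5),
)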
QuantumControl.PulseParameterizations.SquareParameterization
— Function
Parameterization a(t) = ϵ²(t), enforcing pulse values $a(t) ≥ 0$.
QuantumControl.PulseParameterizations.TanhParameterization
— Function
Parameterization with a tanh function that enforces a_min < a(t) < a_max.
QuantumControl.PulseParameterizations.TanhSqParameterization
— Function
Parameterization with a tanh² function that enforces 0 ≤ a(t) < a_max.
Private Symbols
QuantumControl.PulseParameterizations.PulseParameterization
— Type
Specification for a "time-local" pulse parameterization.
The parameterization is given as a collection of three functions:
- $a(ϵ(t))$
- $ϵ(a(t))$
- $∂a/∂ϵ$ as a function of $ϵ(t)$.
QuantumControl.Shapes
Re-exported Symbols:
QuantumControl.Storage
Re-exported Symbols:
QuantumControl.Workflows
Public Symbols:
Public Symbols
QuantumControl.Workflows.@optimize_or_load
— Macro
Run optimize
and store the result, or load the result if it exists.
result = @optimize_or_load(
file,
problem;
method,
force=false,
verbose=true,
metadata=nothing,
logfile=nothing,
kwargs...
)
runs result = optimize(problem; method, kwargs...)
and stores result
in file
in the JLD2 format. Note that the method
keyword argument is mandatory.
In addition to the result
, the data in the output file
can also contain metadata. By default, this is "script" with the file name and line number of where @optimize_or_load
was called, as well as data from the dict metadata
mapping arbitrary (string) keys to values. Lastly, the data contains truncated captured output (up to 1kB of both the beginning and end of the output) from the optimization.
If logfile
is given as the path to a file, both stdout
and stderr
from optimize
are redirected into the given file.
If file
already exists (and force=false
), load the result
from that file instead of running the optimization, and print any (truncated) captured output.
All other kwargs
are passed directly to optimize
.
For methods that support this, @optimize_or_load
will set up a callback to dump the optimization result to file
in case of an unexpected program shutdown, see set_atexit_save_optimization
.
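A usage sketch (the file name is illustrative):
result = @optimize_or_load(
    "krotov_opt.jld2",
    problem;
    method=Krotov,
    metadata=Dict("note" => "first attempt"),
)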
Related Functions
run_or_load
— a function for more general long-running calculations.load_optimization
: Function to load a file produced by@optimize_or_load
QuantumControl.Workflows.load_optimization
— Function
Load a previously stored optimization.
result = load_optimization(file; verbose=false, kwargs...)
recovers a result
previously stored by @optimize_or_load
or save_optimization
.
result, metadata = load_optimization(file; return_metadata=true, kwargs...)
also obtains a metadata dict, see @optimize_or_load
. This dict maps string keys to values.
Calling load_optimization
with verbose=true
will @info
the metadata after loading the file.
QuantumControl.Workflows.run_or_load
— Function
Run some code and write the result to file, or load from the file if it exists.
data = run_or_load(
file;
save=(endswith(file, ".jld2") ? JLD2.save_object : FileIO.save),
load=(endswith(file, ".jld2") ? JLD2.load_object : FileIO.load),
force=false,
verbose=true,
kwargs...
) do
data = Dict() # ... # something that can be saved to / loaded from file
return data
end
runs the code in the block and stores data
in the given file
. If file
already exists, skip running the code and instead return the data in file
.
If force
is true
, run the code whether or not file
exists, potentially overwriting it.
With verbose=true
, information about the status of file
will be shown as @info
.
The data
returned by the code block must be compatible with the format of file
and the save
/load
functions. When using JLD2.save_object
and JLD2.load_object
, almost any data can be written, so this should be particularly safe. More generally, when using FileIO.save
and FileIO.load
, see the FileIO registry for details. A common examples would be a DataFrame
being written to a .csv
file.
See also
@optimize_or_load
— for wrapping aroundoptimize
DrWatson.@produce_or_load
— a similar but more opinionated function with automatic naming
QuantumControl.Workflows.save_optimization
— Function
Write an optimization result to file.
save_optimization(file, result; metadata=nothing)
writes the result obtained from a call to optimize
to the given file
in JLD2 format. If given, metadata
is a dict of additional data that will be stored with the result. The metadata
dict should use strings as keys.
See also
load_optimization
: Function to load a file produced by@optimize_or_load
orsave_optimization
@optimize_or_load
: Runoptimize
andsave_optimization
in one go.