API

GRAPE.gradient — Method

The gradient in the current iteration.

g = gradient(wrk; which=:initial)

returns the gradient associated with the guess pulse of the current iteration. Up to quasi-Newton corrections, the negative gradient determines the search_direction for the pulse_update.

g = gradient(wrk; which=:final)

returns the gradient associated with the optimized pulse of the current iteration.
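For illustration, a hedged sketch of inspecting the gradient inside an info_hook. This assumes the hook receives the GrapeWrk workspace and the iteration number as its first two arguments; my_info_hook is a hypothetical name:

using LinearAlgebra: norm

function my_info_hook(wrk, iteration, args...)
    g = GRAPE.gradient(wrk; which=:initial)
    println("‖∇J‖ at the guess of iteration $iteration: $(norm(g))")
    return nothing
end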

source
GRAPE.print_table — Method

Print optimization progress as a table.

This function serves as the default info_hook for an optimization with GRAPE.
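For example, a hedged sketch of passing it explicitly (equivalent to the default behavior, assuming info_hook is accepted as shown in the optimize documentation below):

result = optimize(problem; method=GRAPE, info_hook=GRAPE.print_table)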

source
GRAPE.pulse_update — Method

The vector of pulse update values for the current iteration.

Δu = pulse_update(wrk)

returns a vector containing the difference between the optimized pulse values and the guess pulse values of the current iteration. This vector is proportional to search_direction, with the proportionality factor step_width.

source
GRAPE.search_direction — Method

The search direction used in the current iteration.

s = search_direction(wrk)

returns the vector describing the search direction used in the current iteration. The pulse_update is proportional to this search direction, with the proportionality factor step_width.

source
GRAPE.step_width — Method

The step width used in the current iteration.

α = step_width(wrk)

returns the scalar α such that pulse_update(wrk) = α * search_direction(wrk), see pulse_update and search_direction, for the iteration described by the current GrapeWrk (i.e., for the state of wrk as available in the info_hook of the current iteration).
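As a consistency illustration, a minimal hedged sketch, assuming wrk is the GrapeWrk available inside an info_hook:

using LinearAlgebra: norm

Δu = GRAPE.pulse_update(wrk)
s = GRAPE.search_direction(wrk)
α = GRAPE.step_width(wrk)
@assert norm(Δu - α * s) < 1e-12  # Δu = α s, up to rounding errors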

source
GRAPE.vec_angle — Method

The angle between two vectors.

ϕ = vec_angle(v1, v2; unit=:rad)

returns the angle between two vectors in radians (or degrees, with unit=:degree).
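For example (a hedged REPL sketch; values shown for orthogonal unit vectors):

julia> GRAPE.vec_angle([1.0, 0.0], [0.0, 1.0])
1.5707963267948966

julia> GRAPE.vec_angle([1.0, 0.0], [0.0, 1.0]; unit=:degree)
90.0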

source
QuantumControlBase.optimize — Method
using GRAPE
result = optimize(problem; method=GRAPE, kwargs...)

optimizes the given control problem via the GRAPE method, by minimizing the functional

\[J(\{ϵ_{nl}\}) = J_T(\{|ϕ_k(T)⟩\}) + λ_a J_a(\{ϵ_{nl}\})\]

where the final time functional $J_T$ depends explicitly on the forward-propagated states and the running cost $J_a$ depends explicitly on pulse values $ϵ_{nl}$ of the l'th control discretized on the n'th interval of the time grid.

Returns a GrapeResult.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of problem; any of these can be overridden with explicit keyword arguments to optimize.
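As a minimal end-to-end sketch: this assumes that trajectories and tlist have been set up elsewhere, that the ControlProblem constructor from QuantumControl accepts them as shown, that J_T_sm (square-modulus fidelity) is available from QuantumControl.Functionals, and that iter_stop is the keyword for the maximum number of iterations:

using QuantumControl
using QuantumControl.Functionals: J_T_sm
using QuantumPropagators: Cheby
using GRAPE

problem = ControlProblem(
    trajectories, tlist;
    J_T=J_T_sm,         # final-time functional
    iter_stop=100,
    prop_method=Cheby,  # global propagation method, see below
)
result = optimize(problem; method=GRAPE)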

Required problem keyword arguments

  • J_T: A function J_T(ϕ, trajectories; τ=τ) that evaluates the final time functional from a vector ϕ of forward-propagated states and problem.trajectories. For all trajectories that define a target_state, the element τₖ of the vector τ will contain the overlap of the state ϕₖ with the target_state of the k'th trajectory, or NaN otherwise.
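A hedged sketch of a custom functional satisfying this interface (an average infidelity over the overlaps τ; an illustration, not the package's built-in definition):

using LinearAlgebra: dot

function my_J_T(ϕ, trajectories; τ=nothing)
    if τ === nothing
        # fall back to computing the overlaps directly
        τ = [traj.target_state === nothing ? NaN : dot(traj.target_state, ϕₖ)
             for (ϕₖ, traj) in zip(ϕ, trajectories)]
    end
    F = [abs2(τₖ) for τₖ in τ if !isnan(τₖ)]  # fidelities |τₖ|²
    return 1.0 - sum(F) / length(F)           # average infidelity
end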

Optional problem keyword arguments

  • chi: A function chi!(χ, ϕ, trajectories) that receives a list ϕ of the forward-propagated states and must set $|χₖ⟩ = -∂J_T/∂⟨ϕₖ|$. If not given, it will be automatically determined from J_T via make_chi with the default parameters.

  • J_a: A function J_a(pulsevals, tlist) that evaluates running costs over the pulse values, where pulsevals are the vectorized values $ϵ_{nl}$, with n the indices of the time intervals and l the indices of the controls, i.e., [ϵ₁₁, ϵ₂₁, …, ϵ₁₂, ϵ₂₂, …] (the pulse values for each control are contiguous). If not given, the optimization will not include a running cost. See the sketch after this list for an example.

  • gradient_method=:gradgen: One of :gradgen (default) or :taylor. With gradient_method=:gradgen, the gradient is calculated using QuantumGradientGenerators. With gradient_method=:taylor, it is evaluated via a Taylor series, see Eq. (20) in Kuprov and Rogers, J. Chem. Phys. 131, 234108 (2009) [4].

  • taylor_grad_max_order=100: If given with gradient_method=:taylor, the maximum number of terms in the Taylor series. If taylor_grad_check_convergence=true (default), an error is thrown if the Taylor series does not converge within the given number of terms. With taylor_grad_check_convergence=false, this is the exact order of the Taylor series.

  • taylor_grad_tolerance=1e-16: If given with gradient_method=:taylor and taylor_grad_check_convergence=true, stop the Taylor series when the norm of the term falls below the given tolerance. Ignored if taylor_grad_check_convergence=false.

  • taylor_grad_check_convergence=true: If given as true (default), check the convergence after each term in the Taylor series and stop as soon as the norm of the term drops below the given tolerance (taylor_grad_tolerance). If false, stop after exactly taylor_grad_max_order terms.

  • lambda_a=1: A weight for the running cost J_a.

  • grad_J_a: A function to calculate the gradient of J_a. If not given, it will be automatically determined.

  • upper_bound: An upper bound for the value of any optimized control. Time-dependent upper bounds can be specified via pulse_options.

  • lower_bound: A lower bound for the value of any optimized control. Time-dependent lower bounds can be specified via pulse_options.

  • pulse_options: A dictionary that maps every control (as obtained by get_controls from the problem.trajectories) to a dict with the following possible keys:

    • :upper_bounds: A vector of upper bound values, one for each interval of the time grid. Values of Inf indicate an unconstrained upper bound for that time interval (respectively, the global upper_bound, if given).
    • :lower_bounds: A vector of lower bound values, one for each interval of the time grid. Values of -Inf indicate an unconstrained lower bound for that time interval (respectively, the global lower_bound, if given).
  • update_hook: Not implemented

  • info_hook: A function (or tuple of functions) that receives the same arguments as update_hook, in order to write information about the current iteration to the screen or to a file. The default info_hook prints a table with convergence information to the screen. Runs after update_hook. The info_hook function may return a tuple, which is stored in the list of records inside the GrapeResult object.

  • check_convergence: A function to check whether convergence has been reached. Receives a GrapeResult object result, and should set result.converged to true and result.message to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with ∘. The convergence check is performed after any calls to update_hook and info_hook.

  • x_tol: Parameter for Optim.jl

  • f_tol: Parameter for Optim.jl

  • g_tol: Parameter for Optim.jl

  • show_trace: Parameter for Optim.jl

  • extended_trace: Parameter for Optim.jl

  • show_every: Parameter for Optim.jl

  • allow_f_increases: Parameter for Optim.jl

  • optimizer: An optional Optim.jl optimizer (Optim.AbstractOptimizer instance). If not given, an L-BFGS-B optimizer will be used.

  • prop_method: The propagation method to use for each trajectory, see below.

  • verbose=false: If true, print information during initialization.
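A hedged sketch combining several of these options. Here ϵ is a placeholder for one of the problem's controls, and my_check_convergence is a hypothetical user-defined check that sets result.converged and result.message:

# running cost: an L2 (fluence) penalty on the vectorized pulse values
function my_J_a(pulsevals, tlist)
    dt = tlist[2] - tlist[1]  # assumes a uniform time grid
    return sum(abs2, pulsevals) * dt
end

result = optimize(
    problem;
    method=GRAPE,
    J_a=my_J_a,
    lambda_a=0.01,
    lower_bound=-1.0,
    upper_bound=1.0,
    pulse_options=IdDict(
        # constrain ϵ to ≤ 0.5 on the first 10 intervals; Inf falls back
        # to the global upper_bound
        ϵ => Dict(
            :upper_bounds => vcat(fill(0.5, 10), fill(Inf, length(tlist) - 11)),
        ),
    ),
    check_convergence=my_check_convergence,
)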

Trajectory propagation

GRAPE may involve three types of propagation:

  • A forward propagation for every Trajectory in the problem
  • A backward propagation for every trajectory
  • A backward propagation of a gradient generator for every trajectory.

The keyword arguments for each propagation (see propagate) are determined from any properties of each Trajectory that have a prop_ prefix, cf. init_prop_trajectory.

In situations where different parameters are required for the forward and backward propagation, instead of the prop_ prefix, the fw_prop_ and bw_prop_ prefix can be used, respectively. These override any setting with the prop_ prefix. Similarly, properties for the backward propagation of the gradient generators can be set with properties that have a grad_prop_ prefix. These prefixes apply both to the properties of each Trajectory and the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method problem keyword argument.
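For example, a hedged sketch, assuming Cheby and ExpProp are available as propagation methods from QuantumPropagators, and with ψ₀, ψ_tgt, and H as placeholders for an initial state, target state, and generator:

using QuantumPropagators: Cheby, ExpProp

traj = Trajectory(
    ψ₀, H;
    target_state=ψ_tgt,
    fw_prop_method=Cheby,      # forward propagation
    bw_prop_method=Cheby,      # backward propagation
    grad_prop_method=ExpProp,  # backward propagation of the gradient generator
)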

source