Lines matching "be" (cross-reference listing; each entry gives the source line number and the matched line):
69 call-back routines will be used for evaluating the objective function,
86 `TaoSetType()` can be overridden at runtime by using an options
107 A TAO solver can be created by calling the
115 communicator indicates a collection of processors that will be used to
118 communicator `PETSC_COMM_SELF` can be used with no understanding of
119 MPI. Even parallel users need to be familiar with only the basic
131 can be used to set the algorithm TAO uses to solve the application. The
132 various types of TAO solvers and the flags that identify them will be
133 discussed in the following sections. The solution method should be
137 accordingly. The user must also be aware of the derivative information
148 Each TAO solver that has been created should also be destroyed by using
160 Additional options for the TAO solver can be set from the command
178 will be cloned by the solvers to create additional work space within the
181 inner products, and function evaluations. This vector can be passed to
190 calling the TAO solver. This vector will be used by the TAO solver to
192 can be retrieved from the application object by using the
217 may be specific to an application and necessary to evaluate the
218 objective, can be collected in a single structure and used as one of the
219 arguments in the routine. The address of this structure will be cast as
231 needed by the application then a NULL pointer can be used.
240 that perform these computations must be identified to the application
252 that identifies where the objective should be evaluated, and the fourth
257 This routine, and the application context, should be passed to the
268 it must be cast as a `(void*)` type. This pointer will be passed back
270 the objective. In this routine, the pointer can be cast back to the
290 be passed to the application object by using the
304 when both function and gradient information can be computed in the same
316 routine should be set with the call
322 where the arguments are the TAO application, the optional vector to be
344 second argument is the point at which the Hessian should be evaluated.
348 often needed. The fourth argument is the matrix that will be used for
349 preconditioning the linear system; in most cases, this matrix will be
362 be stored and the Mat object that will be used for the preconditioning
363 (they may be the same). The fourth argument is the function that
369 Finite-difference approximations can be used to compute the gradient and
384 respectively. They can be set by using `TaoSetGradient()` and
388 The efficiency of the finite-difference Hessian can be improved if the
390 PETSc `MatFDColoring` object, it can be applied to the
403 process can be initiated from the command line by using the special TAO
410 Hessian evaluation routine need not be conventional matrices; instead,
416 (`PCSHELL`). In other words, matrix-free methods cannot be used if a
417 direct solver is to be employed. Details about using matrix-free methods
438 bounds for each variable can be set with the
446 variable, the bound may be set to `PETSC_INFINITY` or `PETSC_NINFINITY`.
447 After the two bound vectors have been set, they may be accessed with the
451 variables, the user must be careful to select a solver that acknowledges
459 function of the optimization variables. These constraints can be imposed either
506 Inequality constraints are assumed to be formulated as $c_i(x) \geq 0$
513 bounds $c_l$ and $c_u$ to be set using the
519 Once the application and solver have been set up, the solver can be
554 tolerances, but they can be changed by using the routine
560 maximum number of iterations. These parameters can be set with the
563 evaluations can be set with the command
575 can be used. This routine will display to standard output the number of
577 to the solver. This same output can be produced by using the command
580 The progress of the optimization solver can be monitored with the
581 runtime option `-tao_monitor`. Although monitoring routines can be
587 infeasibility norm, and step length can be retrieved with the following
599 be found in the manual page for `TaoGetConvergedReason()`.
603 After exiting the `TaoSolve()` function, the solution and the gradient can be
611 Note that the `Vec` returned by `TaoGetSolution()` will be the
612 same vector passed to `TaoSetSolution()`. This information can be
661 be specified in a routine, written by the user, that evaluates
671 be evaluated. The third argument is the vector of function values
673 context. This routine and the user-defined context should be set in the
689 evaluation of the Jacobian of $g$ should be performed by calling
704 inverse matrix may be `PETSC_NULL`, in which case TAO will use a PETSc
706 routines should be registered with TAO by using the
718 argument is the matrix in which the Jacobian information can be stored.
719 For the state Jacobian, the third argument is the matrix that will be
723 method, but faster results may be obtained by manipulating the structure
740 For these problems, the objective function value should be computed as a
754 $J = \partial r(x) / \partial x$, should be computed with a
775 $C: \mathbb R^n \to \mathbb R^m$. These constraints should be
784 which the constraint function should be evaluated. The third argument is
788 This routine and the user-defined context must be registered with TAO by
799 be passed back to the user.
804 evaluation of the Jacobian of $C$ should be performed in a routine
815 matrix may be used in solving a system of linear equations, a
816 preconditioner for the matrix may be needed. The fourth argument is the
817 matrix that will be used for preconditioning the linear system; in most
818 cases, this matrix will be the same as the Hessian matrix. The fifth
822 This routine should be specified to TAO by using the
829 third arguments are the Mat objects in which the Jacobian will be stored
830 and the Mat object that will be used for the preconditioning (they may
831 be the same), respectively. The fourth argument is the function pointer;
833 matrix should be created in a way such that the product of it and the
834 variable vector can be stored in the constraint vector.
855 methods available in TAO for solving these problems can be classified
871 method is likely to perform best. The Nelder-Mead method should be used
874 Each solver has a set of options associated with it that can be set with
914 The Newton line search method can be selected by using the TAO solver
917 gradient evaluations should be performed simultaneously when using this
1113 generalized Lanczos method, this preconditioner must be symmetric and
1185 Hessian matrix will be positive-semidefinite; the perturbation will
1190 of equation, a trust-region radius needs to be initialized and updated.
1252 This algorithm will be deprecated in the next version and replaced by
1277 trust-region method can be set by using the TAO solver `tao_ntr`. The
1280 gradient evaluations should be performed separately when using this
1459 must be symmetric and positive definite. The available options are to
1510 This algorithm will be deprecated in the next version and replaced by
1523 This algorithm will be deprecated in the next version and replaced by
1539 BFGS update formula. The inverse of $H_k$ can readily be applied
1551 default unconstrained minimization solver and can be selected by using
1553 evaluations should be performed simultaneously when using this
1559 options can be configured using the `-tao_lmvm_mat_lmvm` prefix. For
1567 approximation. The provided $H_{0,k}$ must be a PETSc `Mat` type
1575 can be used to prevent resetting the LMVM approximation between
1580 This algorithm will be deprecated in the next version and replaced by
1586 The nonlinear conjugate gradient method can be viewed as an extension of
1591 conjugate gradient method can be selected by using the TAO solver
1593 should be performed simultaneously when using this algorithm.
1598 Dai-Yuan method. These conjugate gradient methods can be specified by
1608 restarts using the gradient direction. The parameter $\eta$ can be
1612 This algorithm will be deprecated in the next version and replaced by
1624 be calculated. The downside is that this algorithm can be slow to
1637 where $\mu$ can be one of
1642 sufficiently small. Because of the way new vectors can be added to the
1643 sorted set, the minimum function value and/or the residual may not be
1646 Two options can be set specifically for the Nelder-Mead algorithm:
1675 can be set to `PETSC_INFINITY` for the upper bound and
1743 be adjusted using `-tao_bnk_as_tol` and `-tao_bnk_as_step` flags,
1744 respectively. The active-set estimation can be disabled using the option
1753 type can be changed using the `-tao_bnk_pc_type`
1758 safeguarded to be positive. `icc` and `ilu` options produce good
1770 the algorithms do not take any BNCG steps. This can be changed using the
1774 in the BNCG solver. However, it may be useful for certain types of
1865 methods. These methods can be specified by using the command line
1886 that may be done by using a negative value of $\xi$; this achieves
1896 All methods can be scaled using the parameter `-tao_bncg_alpha`, which
1913 and should be $\in [0, 1]$. One can disable rescaling of the
1924 {any}`sec_tao_bnk`, which can be deactivated using
1928 tolerance and estimator step length used in the Bertsekas method can be
1997 for $z^{k+1}$, which is soft-threshold. It can be used with either
2010 matrices can either be constant or non-constant, of which fact can be
2016 This issue can be prevented by `TaoADMMSetMinimumSpectralPenalty()`.
2058 perform better. However, the PHR formulation may be desirable for
2134 interior-point methods such as PDIPM, the Hessian matrix tends to be
2368 cannot be performed.
2381 The nonlinear equations $F$ should be specified with the function
2401 should be provided to Tao using `TaoSetJacobianResidual()` routine.
2421 The regularizer weight can be controlled with either
2423 command line option, while the smooth approximation parameter can be set
2427 dictionary is provided, the dictionary is assumed to be an identity
2430 The regularization selection can be made using the command line option
2433 regularization term. This custom term can be defined by using the
2451 applied to a practical least-squares problem can be found in
2462 point $x_+$ to be evaluated is obtained by solving the
2476 bound-constrained quadratic program, it may not be convex and the BQPIP
2478 Newton-Krylov Method should be used; the default is the BNTR
2567 POUNDERS supports the following parameters that can be set from the
2581 (`npmax`$=2n+1$) be used by others.
2593 the subproblem solver can be accessed using the command line options
2608 information is not available for scaling purposes, it can be useful to
2618 gradient and used only when the model gradient is deemed to be a
2659 is at the lower bound, then the function must be increasing and
2661 function must be decreasing and $\nabla f \leq 0$. If the solution
2662 is strictly between the bounds, we must be at a stationary point and
2668 Evaluation routines for $F$ and its Jacobian must be supplied
2670 variables must also be provided. If no starting point is supplied, a
2742 termed an infeasible semismooth method. This method can be specified by
2751 Both $\delta > 0$ and $\rho > 2$ can be modified by using
2757 a projected Armijo line search. This method can be specified by using
2761 $\delta > 0$ and $\rho > 2$ can be modified by using the
2782 termed an infeasible active-set semismooth method. This method can be
2786 a projected Armijo line search. This method can be specified by using
2812 options also apply to GPCG. It can be set by using the TAO solver
2820 quadratic optimization. It can be set by using the TAO solver of
2824 of systems of linear equations, whose solver can be accessed and
2868 radius can be set by using the command
2870 can be found by using the command `TaoGetCurrentTrustRegionRadius()`.
2872 for the algorithm and should be tuned and adjusted for optimal
2875 This algorithm will be deprecated in the next version in favor of the
2885 eliminating the need for Hessian evaluations. The method can be set by
2890 This algorithm will be deprecated in the next version in favor of the
2905 performance of the linear solver may be critical to an efficient
2916 KSP options in PETSc can be found in the {doc}`/manual/index`.
2933 routine for any application-specific computations that should be done
2940 Convergence of a solver can be defined in many ways. The methods TAO
2953 Within this routine, the solver can be queried for the solution vector,
2967 be set by using the routine
3001 `TaoSolve()` call to hot-start the new solution. This can be enabled
3044 can be used by the solver. These objects are standard mathematical
3046 may be distributed over multiple processors, restricted to a single
3070 application programmers, but may be necessary for solver implementations
3075 TAO solvers must be written in C or C++ and include several routines
3079 routines may be written to set options within the solver, view the
3167 vectors and a scalar that will be needed in the algorithm.
3171 routine `TaoComputeObjectiveAndGradient()`. Other routines may be used
3198 constraints or violation of bounds. This number should be zero for
3208 inner product. A full list of these methods can be found in the manual
3222 details on line searches can be found in
3230 can be found in the PETSc users manual.
3238 conjugate gradient algorithm shown above can be implemented as follows.
3275 pointer to this data structure (`tao->data`) so it can be accessed by
3280 creates a particular line search. These defaults could be specified in
3319 `tao->gradient`, etc.) will be destroyed by TAO immediately after the
3347 The SetFromOptions routine should be used to check for any
3348 algorithm-specific options set by the user and will be called when the
3366 The View routine should be used to output any algorithm-specific
3367 information or statistics at the end of a solve. This routine will be
3403 dynamic loading, then the fourth argument will be ignored.
3405 Once the solver has been registered, the new solver can be selected