Lines Matching refs:objective

69 call-back routines will be used for evaluating the objective function,
116 evaluate the objective function, compute constraints, and provide
215 For example, a routine that evaluates an objective function may need
218 objective, can be collected in a single structure and used as one of the
237 TAO solvers that minimize an objective function require the application
238 to evaluate the objective function. Some solvers may also require the
239 application to evaluate derivatives of the objective function. Routines
249 in order to evaluate an objective function
252 that identifies where the objective should be evaluated, and the fourth
254 argument to return the objective value evaluated at the point specified
266 the objective, and the third argument is the pointer to an appropriate
270 the objective. In this routine, the pointer can be cast back to the
275 The gradient of the objective function is specified in a similar manner.
286 routine for evaluating the objective function. The numbers in the
288 should represent the gradient of the objective at the specified point at
301 Instead of evaluating the objective and its gradient in separate
370 the Hessian of an objective function. These approximations will slow the
553 desired in the objective function. Each solver sets its own convergence
586 current iteration number, objective function value, gradient norm,
636 the design variable $v$, and $f$ is an objective function.
641 We make two main assumptions when solving these problems: the objective
740 For these problems, the objective function value should be computed as a
895 objective function at $x_k$ and $g_k$ is the gradient of the
896 objective function at $x_k$. For problems where the Hessian matrix
1143 where $g(x_k)$ is the gradient of the objective function and
1162 where $g(x_k)$ is the gradient of the objective function and
1176 where $g(x_k)$ is the gradient of the objective function,
1201 in the nonlinear function. The iterate obtaining the best objective
1230 objective function to the reduction predicted by the quadratic model for
1269 the objective function at $x_k$, $g_k$ is the gradient of
1270 the objective function at $x_k$, and $\Delta_k$ is the
1272 nonlinear objective function, then the step is accepted, and the
1274 sufficiently reduce the nonlinear objective function, then the step is
1476 in the nonlinear function. The iterate obtaining the best objective
1488 ratio of the actual reduction in the objective function to the reduction
1629 $\{x_1,x_2,\ldots,x_{N+1}\}$ and their corresponding objective
1671 These solvers use the bounds on the variables as well as objective
1776 expensive than the objective function or its gradient.
2082 Here, $f(x)$ is the nonlinear objective function, $g(x)$,
2152 the design variable $v$, and $f$ is an objective function.
2157 We make two main assumptions when solving these problems: the objective
2398 words, the Gauss-Newton method approximates the Hessian of the objective
2399 as $H_k \approx (J_k^T J_k)$ and the gradient of the objective as
2604 storing the separable objective function, and a routine for evaluating
2616 the gradient of the objective function is not available. Therefore, for
2619 reasonable approximation of the gradient of the objective. In practice,
2657 $f: \mathbb R^n \to \mathbb R$ is the objective function. In a
2802 where the gradient and the Hessian of the objective are both constant.
2808 it assumes that the objective function is quadratic and convex.
2810 Since the objective function is quadratic, the algorithm does not use a
2822 assumes the objective function is quadratic, it evaluates the function,
2832 unconstrained objective in the form of
2843 an unconstrained objective in the form of
2856 conjugate gradient method to minimize an objective function. Each
3106 the objective function and constraints, pointers to the variable vector
3220 passes the current solution, gradient, and objective value to the line
3221 search and returns a new solution, gradient, and objective value. More