
4 This document is a README file for coding parallel AMG for Pressure
9 This document is intended to serve as extra comments for the source, not
13 you continue. Row-wise sparse storage is used everywhere in this
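Since everything downstream assumes this layout, here is a minimal sketch of row-wise (CSR) sparse storage: values, column indices, and row pointers. The names are illustrative only, not the variables actually used in the (Fortran) source.

```python
def dense_to_csr(A):
    """Compress a dense row-major matrix into CSR arrays:
    vals (nonzeros), cols (their column indices), rowp (row pointers)."""
    vals, cols, rowp = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0.0:
                vals.append(a)
                cols.append(j)
        rowp.append(len(vals))
    return vals, cols, rowp

def csr_matvec(vals, cols, rowp, x):
    """y = A @ x using the CSR arrays: each row i owns the nonzero
    range rowp[i]..rowp[i+1]."""
    return [sum(vals[k] * x[cols[k]] for k in range(rowp[i], rowp[i + 1]))
            for i in range(len(rowp) - 1)]
```

The row-pointer array is what makes row-wise traversal (needed for row-by-row interpolation assembly later) cheap.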
27 smartAMG, which is never used.
32 to do the actual preconditioning. This is the interface between our AMG
43 We have to start from this file. This is a module file containing
49 dimensions. This is used, as shown in this file later, in submatrices
55 amg_ppeDiag holds the important diagonal scaling matrices for the different levels.
60 The purpose of this file is: 1) CF-splitting, 2) setting up the interpolation
63 CF-splitting is done in a separate function, ramg_CFsplit. Other than the
64 thesis, one thing to note is that we use heap sort for the lambda array: we
65 keep changing this array, and every time we want to extract the highest
66 value from it, so heap sort is a natural choice.
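The extract-max pattern can be sketched as follows. Python's heapq is a min-heap, so values are negated; this only illustrates why a heap fits the access pattern, it is not the source's Fortran implementation.

```python
import heapq

def visit_by_highest_lambda(lambdas):
    """Repeatedly pull the node with the highest lambda value.
    Illustrative stand-in for the heap used in ramg_CFsplit."""
    heap = [(-lam, node) for node, lam in enumerate(lambdas)]
    heapq.heapify(heap)                 # O(n) build
    order = []
    while heap:
        _, node = heapq.heappop(heap)   # O(log n) per extract-max
        order.append(node)
    return order
```

In the real algorithm, neighbour lambdas are updated after each pick; a heap keeps each extract/update at O(log n) instead of rescanning the whole array.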
69 ramg_readin_cfmap. This is to ensure that shared boundary nodes get the same
74 CF_map is an array to set the order for all the nodes according to
79 amg_paramap is an array that records "grouping" information for coarsening
80 by group in parallel. For each node, there is a "group id" assigned to
81 it. For all interior nodes, the "group id" is the rank of the local
82 processor. For a boundary node, if it is a "master", its group id is
83 the neighbour rank; if it is a "slave", its group id is the negative
85 information is computed at the very beginning for the finest level. Then,
86 after each coarsening, the information is carried on to the higher level in
88 Also, CF-splitting is done within each group, if you pay attention to
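The group-by-group restriction can be sketched like this. The C/F rule below is a trivial stand-in (the real criterion uses the lambda values), and all names here are illustrative, not the source's:

```python
def cf_split_by_group(paramap):
    """Assign C/F marks group by group, mirroring how CF-splitting is
    restricted to one amg_paramap group at a time. Stand-in rule: the
    first node of each group becomes C, the rest F; the actual code
    decides this from the lambda values instead."""
    marks = {}
    for gid in sorted(set(paramap)):
        members = [n for n, g in enumerate(paramap) if g == gid]
        for i, n in enumerate(members):
            marks[n] = 'C' if i == 0 else 'F'
    return marks
```

The point is only that no splitting decision ever crosses a group boundary, which is what lets the groups be processed independently in parallel.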
92 The interpolation operator is a sparse matrix, but it's hard to build it at
93 once. amg_I is used then: it's an array of allocatable pointers, so we
94 can build row by row. Eventually it is assembled into I_cf_*, which is
101 In ramg_prep, please note that the variable maxstopsign is the control
102 variable for coarsening. Coarsening is stopped if this value is true.
104 coarsening is going on. For those that have already been coarsened and have true
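The stop-flag logic can be sketched in serial form. The real code keeps per-task state and a global flag; the sizes, the 2:1 ratio, and every name below are made up for illustration:

```python
def coarsen_until_done(n_fine, min_size, max_levels):
    """Build a level hierarchy until the stop condition trips.
    Stand-in for maxstopsign: a single boolean that, once true,
    halts further coarsening."""
    sizes = [n_fine]
    stopsign = False
    while not stopsign and len(sizes) < max_levels:
        n_coarse = sizes[-1] // 2      # made-up 2:1 coarsening ratio
        if n_coarse < min_size:
            stopsign = True            # next level too small: stop
        else:
            sizes.append(n_coarse)
    return sizes
```

Once the flag is true, the last level in the list is the coarsest level, which is where the direct solve mentioned below comes in.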
110 direct solve for the coarsest level is selected. Thus, each iteration will
123 The main wrapper, nothing fancy. ramg_interface is the only gate to the
126 is never used.
134 incorporated into the interpolation operator, i.e. the interpolation is modified
135 to directly give a scaled matrix. Another place is the allocation of
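The mechanism can be sketched as pre-scaling the interpolation rows. The symmetric 1/sqrt(diag) factor shown here is an assumption (the source may use a different scaling), and dense lists stand in for the sparse structures:

```python
def scale_interpolation(I, diag):
    """Fold a diagonal scaling into the interpolation operator so that
    applying it directly yields scaled quantities. The 1/sqrt(diag[i])
    row factor is an assumed symmetric scaling, not necessarily the
    exact one the source uses."""
    return [[v / diag[i] ** 0.5 for v in I[i]] for i in range(len(I))]
```

The design point is that one fused operator replaces two passes over the data (scale, then interpolate).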
152 Now to the parallel part. lhsGP* is a sparse-communicated duplicate of lhsP
157 You must read the GGB section of Chun Sun's thesis. ARPACK/PARPACK is
165 In ggb_setup, there is an Allreduce with MPI_MAX; this is to search for a
201 smoothers. ramg_chebyratio is an input coefficient that determines the
202 smallest eigenvalue you want to smooth using the Chebyshev polynomial; it is
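The role of the ratio can be sketched with a textbook two-term Chebyshev iteration: the smoother damps error components whose eigenvalues lie in [ratio * lam_max, lam_max]. This is a generic form under that assumption, not the source's implementation:

```python
def cheby_smooth(apply_A, b, x, lam_max, ratio, steps):
    """Chebyshev smoothing over the interval [ratio*lam_max, lam_max],
    using the standard two-term recurrence. Illustrative sketch."""
    lam_min = ratio * lam_max
    theta = 0.5 * (lam_max + lam_min)   # interval centre
    delta = 0.5 * (lam_max - lam_min)   # interval half-width
    sigma = theta / delta
    rho = 1.0 / sigma
    r = [bi - ai for bi, ai in zip(b, apply_A(x))]
    d = [ri / theta for ri in r]
    for _ in range(steps):
        x = [xi + di for xi, di in zip(x, d)]
        r = [bi - ai for bi, ai in zip(b, apply_A(x))]
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = [rho_new * rho * di + 2.0 * rho_new / delta * ri
             for di, ri in zip(d, r)]
        rho = rho_new
    return x
```

A smaller ratio widens the damped interval (cheaper eigenvalue estimation, weaker damping per step); a larger ratio narrows it.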
210 global_lhs is a function that reads in lhsP (in sparse format so there
215 alter the structure of lhsP too. This is done task by task. There are
217 leave this subroutine as is. If you really want to do something, read
218 the source very carefully. One thing about this is that we are not
222 which won't be a big issue for the overall performance. It is a direct
231 matrices. Everything is similar to plain CG, but smaller in size because
247 The only thing to note is that in Gauss-Seidel we follow red-black smoothing,
250 preconditioner, i.e. the preconditioner is not the same from solve to solve,
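The red-black ordering can be sketched on a 1-D Poisson stencil; this illustrates the two-colour sweep only, while the source applies it to the sparse pressure matrix:

```python
def rb_gs_sweep(u, f, h):
    """One red-black Gauss-Seidel sweep for -u'' = f with the standard
    (-1, 2, -1)/h^2 stencil: update every point of one colour, then the
    other, so each half-sweep is order-independent (and hence
    parallelisable within a colour)."""
    for parity in (1, 0):                 # "red" points, then "black"
        for i in range(1, len(u) - 1):
            if i % 2 == parity:
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u
```

Because each colour only reads the other colour's values, the result does not depend on traversal order within a colour, unlike plain lexicographic Gauss-Seidel.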
261 calcAv_g is the global Ap-product for all AMG matrices; the kernel is