xref: /petsc/src/ksp/ksp/tutorials/output/ex56_latebs-2.out (revision 70646cd191a02c3aba559ba717dac5da7a8a1e20)
  0 KSP Residual norm 811.998
  1 KSP Residual norm 197.037
  2 KSP Residual norm 76.0612
  3 KSP Residual norm 28.3601
  4 KSP Residual norm 7.64702
  5 KSP Residual norm 4.00353
  6 KSP Residual norm 1.74934
  7 KSP Residual norm 0.751483
  8 KSP Residual norm 0.28333
  9 KSP Residual norm 0.0874762
 10 KSP Residual norm 0.0353676
 11 KSP Residual norm 0.017824
 12 KSP Residual norm 0.00703599
  Linear solve converged due to CONVERGED_RTOL iterations 12
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -0.01   -0.01
        Threshold scaling factor for each level not specified = 1.
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 8 MPI processes
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.054    operator = 1.07125
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                162            4             87                0
               3000            8             66               19
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.637067, max 3.3446
        eigenvalues provided (min 0.0597913, max 3.18533) with transform: [0. 0.2; 0. 1.05]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
  0 KSP Residual norm 0.00811969
  1 KSP Residual norm 0.00196934
  2 KSP Residual norm 0.000759615
  3 KSP Residual norm 0.000282977
  4 KSP Residual norm 7.65127e-05
  5 KSP Residual norm 4.02809e-05
  6 KSP Residual norm 1.76022e-05
  7 KSP Residual norm 7.54699e-06
  8 KSP Residual norm 2.84038e-06
  9 KSP Residual norm 8.7449e-07
 10 KSP Residual norm 3.53116e-07
 11 KSP Residual norm 1.7785e-07
 12 KSP Residual norm 7.0347e-08
  Linear solve converged due to CONVERGED_RTOL iterations 12
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -0.01   -0.01
        Threshold scaling factor for each level not specified = 1.
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 8 MPI processes
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.054    operator = 1.07125
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                162            4             87                0
               3000            8             66               19
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.637376, max 3.34622
        eigenvalues estimated via gmres: min 0.0806313, max 3.18688
        eigenvalues estimated using gmres with transform: [0. 0.2; 0. 1.05]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: gmres
            restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
            happy breakdown tolerance=1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
  0 KSP Residual norm 0.00811969
  1 KSP Residual norm 0.00196934
  2 KSP Residual norm 0.000759615
  3 KSP Residual norm 0.000282977
  4 KSP Residual norm 7.65127e-05
  5 KSP Residual norm 4.02809e-05
  6 KSP Residual norm 1.76022e-05
  7 KSP Residual norm 7.54699e-06
  8 KSP Residual norm 2.84038e-06
  9 KSP Residual norm 8.7449e-07
 10 KSP Residual norm 3.53116e-07
 11 KSP Residual norm 1.7785e-07
 12 KSP Residual norm 7.0347e-08
  Linear solve converged due to CONVERGED_RTOL iterations 12
KSP Object: 8 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 8 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -0.01   -0.01
        Threshold scaling factor for each level not specified = 1.
        Using parallel coarse grid solver (all coarse grid equations not put on one process)
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 8 MPI processes
            type: mis
          Number smoothing steps to construct prolongation 1
        Complexity:    grid = 1.054    operator = 1.07125
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                162            4             87                0
               3000            8             66               19
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 8 MPI processes
      type: cg
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object: (mg_coarse_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=162, cols=162, bs=6
        total: nonzeros=14076, allocated nonzeros=14076
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 4 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 8 MPI processes
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.637376, max 3.34622
        eigenvalues estimated via gmres: min 0.0806313, max 3.18688
        eigenvalues estimated using gmres with transform: [0. 0.2; 0. 1.05]
        KSP Object: (mg_levels_1_esteig_) 8 MPI processes
          type: gmres
            restart=30, using classical (unmodified) Gram-Schmidt orthogonalization with no iterative refinement
            happy breakdown tolerance=1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 8 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix, which is also used to construct the preconditioner:
      Mat Object: 8 MPI processes
        type: mpiaij
        rows=3000, cols=3000, bs=3
        total: nonzeros=197568, allocated nonzeros=243000
        total number of mallocs used during MatSetValues calls=0
          has attached near null space
          using I-node (on process 0) routines: found 125 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix, which is also used to construct the preconditioner:
  Mat Object: 8 MPI processes
    type: mpiaij
    rows=3000, cols=3000, bs=3
    total: nonzeros=197568, allocated nonzeros=243000
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 125 nodes, limit used is 5
[0]main |b-Ax|/|b|=2.425235e-04, |b|=5.391826e+00, emax=9.946388e-01