xref: /petsc/src/snes/tutorials/output/ex5_mis_view_detailed.out (revision 7e1a0bbe36d2be40a00a95404ece00db4857f70d)
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally computed Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -1.   -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
              MIS aggregator lists are not available
          Number of smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                  3            1              3                0
                 16            1              4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=3, cols=3
                  package used to perform factorization: petsc
                  total: nonzeros=9, allocated nonzeros=9
                    using I-node routines: found 1 nodes, limit used is 5
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=3, cols=3
            total: nonzeros=9, allocated nonzeros=9
            total number of mallocs used during MatSetValues calls=0
              using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.514268, max 5.65695
        eigenvalues provided (min 0.299461, max 5.14268) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally computed Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -1.   -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
              MIS aggregator lists are not available
          Number of smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                  3            1              3                0
                 16            1              4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=3, cols=3
                  package used to perform factorization: petsc
                  total: nonzeros=9, allocated nonzeros=9
                    using I-node routines: found 1 nodes, limit used is 5
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=3, cols=3
            total: nonzeros=9, allocated nonzeros=9
            total number of mallocs used during MatSetValues calls=0
              using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.159372, max 1.75309
        eigenvalues estimated via gmres: min 0.406283, max 1.59372
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally computed Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -1.   -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
              MIS aggregator lists are not available
          Number of smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                  3            1              3                0
                 16            1              4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=3, cols=3
                  package used to perform factorization: petsc
                  total: nonzeros=9, allocated nonzeros=9
                    using I-node routines: found 1 nodes, limit used is 5
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=3, cols=3
            total: nonzeros=9, allocated nonzeros=9
            total number of mallocs used during MatSetValues calls=0
              using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160581, max 1.76639
        eigenvalues estimated via gmres: min 0.394193, max 1.60581
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines
KSP Object: 1 MPI process
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI process
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally computed Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   -1.   -1.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          MatCoarsen Object: (pc_gamg_) 1 MPI process
            type: mis
              MIS aggregator lists are not available
          Number of smoothing steps to construct prolongation 1
        Complexity:    grid = 1.1875    operator = 1.14062
        Per-level complexity: op = operator, int = interpolation
            #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                  3            1              3                0
                 16            1              4                2
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 1 MPI process
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_coarse_) 1 MPI process
      type: bjacobi
        number of blocks = 1
        Local solver information for each block is in the following KSP and PC objects:
        [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object: (mg_coarse_sub_) 1 MPI process
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          not checking for convergence
        PC Object: (mg_coarse_sub_) 1 MPI process
          type: lu
            out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5., needed 1.
              Factored matrix follows:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=3, cols=3
                  package used to perform factorization: petsc
                  total: nonzeros=9, allocated nonzeros=9
                    using I-node routines: found 1 nodes, limit used is 5
          linear system matrix = precond matrix:
          Mat Object: (mg_coarse_sub_) 1 MPI process
            type: seqaij
            rows=3, cols=3
            total: nonzeros=9, allocated nonzeros=9
            total number of mallocs used during MatSetValues calls=0
              using I-node routines: found 1 nodes, limit used is 5
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Mat Object: (mg_coarse_sub_) 1 MPI process
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls=0
          using I-node routines: found 1 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI process
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.160614, max 1.76675
        eigenvalues estimated via gmres: min 0.393863, max 1.60614
        eigenvalues estimated using gmres with transform: [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI process
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        estimating eigenvalues using a noisy random number generated right-hand side
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      not checking for convergence
    PC Object: (mg_levels_1_) 1 MPI process
      type: jacobi
        type DIAGONAL
      Vec Object: 1 MPI process
        type: seq
        length=16
      linear system matrix = precond matrix:
      Mat Object: 1 MPI process
        type: seqaij
        rows=16, cols=16
        total: nonzeros=64, allocated nonzeros=64
        total number of mallocs used during MatSetValues calls=0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI process
    type: seqaij
    rows=16, cols=16
    total: nonzeros=64, allocated nonzeros=64
    total number of mallocs used during MatSetValues calls=0
      not using I-node routines