    Linear solve converged due to CONVERGED_RTOL iterations 5
SNES Object: 4 MPI processes
  type: ksponly
  maximum iterations=1, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=5
  total number of function evaluations=1
  norm schedule ALWAYS
  Jacobian is never rebuilt
  KSP Object: 4 MPI processes
    type: cg
    maximum iterations=100, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using externally computed Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level =     0.001     0.001
          Threshold scaling factor for each level not specified = 1.
          Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
          MatCoarsen Object: 4 MPI processes
            type: hem
            3 matching steps with threshold = 0.
          AGG specific options
            Number of levels of aggressive coarsening 1
            Square graph aggressive coarsening
            MatCoarsen Object: (pc_gamg_) 4 MPI processes
              type: mis
            Number of smoothing steps to construct prolongation 1
          Complexity:    grid = 1.11111    operator = 1.02041
    Coarse grid solver -- level 0 -------------------------------
      KSP Object: (mg_coarse_) 4 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 4 MPI processes
        type: bjacobi
          number of blocks = 4
          Local solver information for first block is in the following KSP and PC objects on rank 0:
          Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 1., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=1, cols=1
                    package used to perform factorization: petsc
                    total: nonzeros=1, allocated nonzeros=1
                      not using I-node routines
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=1, cols=1
              total: nonzeros=1, allocated nonzeros=1
              total number of mallocs used during MatSetValues calls=0
                not using I-node routines
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=1, cols=1
          total: nonzeros=1, allocated nonzeros=1
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 4 MPI processes
        type: chebyshev
          Chebyshev polynomial of first kind
          eigenvalue targets used: min 0.260428, max 1.43236
          eigenvalues provided (min 0.413297, max 1.30214) with transform: [0. 0.2; 0. 1.1]
        maximum iterations=2, nonzero initial guess
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 4 MPI processes
        type: jacobi
          type DIAGONAL
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=9, cols=9
          total: nonzeros=49, allocated nonzeros=49
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: mpiaij
      rows=9, cols=9
      total: nonzeros=49, allocated nonzeros=49
      total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
DM Object: box 4 MPI processes
  type: plex
box in 3 dimensions:
  Number of 0-cells per rank: 8 8 8 8
  Number of 1-cells per rank: 12 12 12 12
  Number of 2-cells per rank: 6 6 6 6
  Number of 3-cells per rank: 1 1 1 1
Labels:
  depth: 4 strata with value/size (0 (8), 1 (12), 2 (6), 3 (1))
  marker: 1 strata with value/size (1 (23))
  Face Sets: 4 strata with value/size (1 (1), 2 (1), 3 (1), 6 (1))
  celltype: 4 strata with value/size (0 (8), 1 (12), 4 (6), 7 (1))
  boundary: 1 strata with value/size (1 (23))
Field deformation:
  adjacency FEM
DM Object: Mesh 4 MPI processes
  type: plex
Mesh in 3 dimensions:
  Number of 0-cells per rank: 27 27 27 27
  Number of 1-cells per rank: 54 54 54 54
  Number of 2-cells per rank: 36 36 36 36
  Number of 3-cells per rank: 8 8 8 8
Labels:
  celltype: 4 strata with value/size (0 (27), 1 (54), 4 (36), 7 (8))
  depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8))
  marker: 1 strata with value/size (1 (77))
  Face Sets: 4 strata with value/size (1 (9), 2 (9), 3 (9), 6 (9))
  boundary: 1 strata with value/size (1 (77))
Field deformation:
  adjacency FEM
    Linear solve converged due to CONVERGED_RTOL iterations 8
SNES Object: 4 MPI processes
  type: ksponly
  maximum iterations=1, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=8
  total number of function evaluations=1
  norm schedule ALWAYS
  Jacobian is never rebuilt
  KSP Object: 4 MPI processes
    type: cg
    maximum iterations=100, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using externally computed Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level =     0.001     0.001
          Threshold scaling factor for each level not specified = 1.
          Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
          MatCoarsen Object: 4 MPI processes
            type: hem
            3 matching steps with threshold = 0.
          AGG specific options
            Number of levels of aggressive coarsening 1
            Square graph aggressive coarsening
            MatCoarsen Object: (pc_gamg_) 4 MPI processes
              type: mis
            Number of smoothing steps to construct prolongation 1
          Complexity:    grid = 1.02721    operator = 1.00432
    Coarse grid solver -- level 0 -------------------------------
      KSP Object: (mg_coarse_) 4 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 4 MPI processes
        type: bjacobi
          number of blocks = 4
          Local solver information for first block is in the following KSP and PC objects on rank 0:
          Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
          KSP Object: (mg_coarse_sub_) 1 MPI process
            type: preonly
            maximum iterations=1, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_sub_) 1 MPI process
            type: lu
              out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5., needed 1.
                Factored matrix follows:
                  Mat Object: (mg_coarse_sub_) 1 MPI process
                    type: seqaij
                    rows=4, cols=4
                    package used to perform factorization: petsc
                    total: nonzeros=16, allocated nonzeros=16
                      using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: (mg_coarse_sub_) 1 MPI process
              type: seqaij
              rows=4, cols=4
              total: nonzeros=16, allocated nonzeros=16
              total number of mallocs used during MatSetValues calls=0
                using I-node routines: found 1 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=4, cols=4
          total: nonzeros=16, allocated nonzeros=16
          total number of mallocs used during MatSetValues calls=0
            using I-node (on process 0) routines: found 1 nodes, limit used is 5
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 4 MPI processes
        type: chebyshev
          Chebyshev polynomial of first kind
          eigenvalue targets used: min 0.327489, max 1.80119
          eigenvalues provided (min 0.133814, max 1.63744) with transform: [0. 0.2; 0. 1.1]
        maximum iterations=2, nonzero initial guess
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 4 MPI processes
        type: jacobi
          type DIAGONAL
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=147, cols=147
          total: nonzeros=3703, allocated nonzeros=3703
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 4 MPI processes
      type: mpiaij
      rows=147, cols=147
      total: nonzeros=3703, allocated nonzeros=3703
      total number of mallocs used during MatSetValues calls=0
        not using I-node (on process 0) routines
[0] 0) N=           9, max displ=2.5713786e+01, error=9.564e+00
[0] 1) N=         147, max displ=3.1758769e+01, disp diff= 6.04e+00, error=3.519e+00, rate=1.4