xref: /petsc/src/snes/tutorials/output/ex56_1.out (revision 6dd63270497ad23dcf16ae500a87ff2b2a0b7474)
    Linear solve converged due to CONVERGED_RTOL iterations 5
SNES Object: 4 MPI processes
  type: ksponly
  maximum iterations=1, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=5
  total number of function evaluations=1
  norm schedule ALWAYS
  Jacobian is never rebuilt
  KSP Object: 4 MPI processes
    type: cg
    maximum iterations=100, initial guess is zero
    tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
    left preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 4 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=2 cycles=v
        Cycles per PCApply=1
        Using externally computed Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level =     0.001     0.001
          Threshold scaling factor for each level not specified = 1.
          Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
            MatCoarsen Object: 4 MPI processes
              type: hem
              3 matching steps with threshold = 0.
            AGG specific options
              Number of levels of aggressive coarsening 1
              Square graph aggressive coarsening
              MatCoarsen Object: (pc_gamg_) 4 MPI processes
                type: mis
              Number smoothing steps to construct prolongation 1
            Complexity:    grid = 1.11111    operator = 1.02041
            Per-level complexity: op = operator, int = interpolation
                #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                      1            1              1                0
                      9            4              6                1
      Coarse grid solver -- level 0 -------------------------------
        KSP Object: (mg_coarse_) 4 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_coarse_) 4 MPI processes
          type: bjacobi
            number of blocks = 4
            Local solver information for first block is in the following KSP and PC objects on rank 0:
            Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
            KSP Object: (mg_coarse_sub_) 1 MPI process
              type: preonly
              maximum iterations=1, initial guess is zero
              tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
              left preconditioning
              using NONE norm type for convergence test
            PC Object: (mg_coarse_sub_) 1 MPI process
              type: lu
                out-of-place factorization
                tolerance for zero pivot 2.22045e-14
                using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
                matrix ordering: nd
                factor fill ratio given 1., needed 1.
                  Factored matrix follows:
                    Mat Object: (mg_coarse_sub_) 1 MPI process
                      type: seqaij
                      rows=1, cols=1
                      package used to perform factorization: petsc
                      total: nonzeros=1, allocated nonzeros=1
                        not using I-node routines
              linear system matrix = precond matrix:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=1, cols=1
                total: nonzeros=1, allocated nonzeros=1
                total number of mallocs used during MatSetValues calls=0
                  not using I-node routines
          linear system matrix = precond matrix:
          Mat Object: 4 MPI processes
            type: mpiaij
            rows=1, cols=1
            total: nonzeros=1, allocated nonzeros=1
            total number of mallocs used during MatSetValues calls=0
              using nonscalable MatPtAP() implementation
              not using I-node (on process 0) routines
      Down solver (pre-smoother) on level 1 -------------------------------
        KSP Object: (mg_levels_1_) 4 MPI processes
          type: chebyshev
            Chebyshev polynomial of first kind
            eigenvalue targets used: min 0.260428, max 1.43236
            eigenvalues provided (min 0.413297, max 1.30214) with transform: [0. 0.2; 0. 1.1]
          maximum iterations=2, nonzero initial guess
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
          left preconditioning
          using NONE norm type for convergence test
        PC Object: (mg_levels_1_) 4 MPI processes
          type: jacobi
            type DIAGONAL
          linear system matrix = precond matrix:
          Mat Object: 4 MPI processes
            type: mpiaij
            rows=9, cols=9
            total: nonzeros=49, allocated nonzeros=49
            total number of mallocs used during MatSetValues calls=0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      linear system matrix = precond matrix:
      Mat Object: 4 MPI processes
        type: mpiaij
        rows=9, cols=9
        total: nonzeros=49, allocated nonzeros=49
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  DM Object: box 4 MPI processes
    type: plex
  box in 3 dimensions:
    Number of 0-cells per rank:   8   8   8   8
    Number of 1-cells per rank:   12   12   12   12
    Number of 2-cells per rank:   6   6   6   6
    Number of 3-cells per rank:   1   1   1   1
  Labels:
    depth: 4 strata with value/size (0 (8), 1 (12), 2 (6), 3 (1))
    marker: 1 strata with value/size (1 (23))
    Face Sets: 4 strata with value/size (1 (1), 2 (1), 3 (1), 6 (1))
    celltype: 4 strata with value/size (0 (8), 1 (12), 4 (6), 7 (1))
    boundary: 1 strata with value/size (1 (23))
  Field deformation:
    adjacency FEM
  DM Object: Mesh 4 MPI processes
    type: plex
  Mesh in 3 dimensions:
    Number of 0-cells per rank:   27   27   27   27
    Number of 1-cells per rank:   54   54   54   54
    Number of 2-cells per rank:   36   36   36   36
    Number of 3-cells per rank:   8   8   8   8
  Labels:
    celltype: 4 strata with value/size (0 (27), 1 (54), 4 (36), 7 (8))
    depth: 4 strata with value/size (0 (27), 1 (54), 2 (36), 3 (8))
    marker: 1 strata with value/size (1 (77))
    Face Sets: 4 strata with value/size (1 (9), 2 (9), 3 (9), 6 (9))
    boundary: 1 strata with value/size (1 (77))
  Field deformation:
    adjacency FEM
      Linear solve converged due to CONVERGED_RTOL iterations 8
  SNES Object: 4 MPI processes
    type: ksponly
    maximum iterations=1, maximum function evaluations=10000
    tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
    total number of linear solver iterations=8
    total number of function evaluations=1
    norm schedule ALWAYS
    Jacobian is never rebuilt
    KSP Object: 4 MPI processes
      type: cg
      maximum iterations=100, initial guess is zero
      tolerances: relative=1e-10, absolute=1e-50, divergence=10000.
      left preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object: 4 MPI processes
      type: gamg
        type is MULTIPLICATIVE, levels=2 cycles=v
          Cycles per PCApply=1
          Using externally computed Galerkin coarse grid matrices
          GAMG specific options
            Threshold for dropping small values in graph on each level =       0.001       0.001
            Threshold scaling factor for each level not specified = 1.
            Using aggregates made with 3 applications of heavy edge matching (HEM) to define subdomains for PCASM
              MatCoarsen Object: 4 MPI processes
                type: hem
                3 matching steps with threshold = 0.
              AGG specific options
                Number of levels of aggressive coarsening 1
                Square graph aggressive coarsening
                MatCoarsen Object: (pc_gamg_) 4 MPI processes
                  type: mis
                Number smoothing steps to construct prolongation 1
              Complexity:    grid = 1.02721    operator = 1.00432
              Per-level complexity: op = operator, int = interpolation
                  #equations  | #active PEs | avg nnz/row op | avg nnz/row int
                        4            1              4                0
                      147            4             26                3
        Coarse grid solver -- level 0 -------------------------------
          KSP Object: (mg_coarse_) 4 MPI processes
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_) 4 MPI processes
            type: bjacobi
              number of blocks = 4
              Local solver information for first block is in the following KSP and PC objects on rank 0:
              Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
              KSP Object: (mg_coarse_sub_) 1 MPI process
                type: preonly
                maximum iterations=1, initial guess is zero
                tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
                left preconditioning
                using NONE norm type for convergence test
              PC Object: (mg_coarse_sub_) 1 MPI process
                type: lu
                  out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
                  matrix ordering: nd
                  factor fill ratio given 5., needed 1.
                    Factored matrix follows:
                      Mat Object: (mg_coarse_sub_) 1 MPI process
                        type: seqaij
                        rows=4, cols=4
                        package used to perform factorization: petsc
                        total: nonzeros=16, allocated nonzeros=16
                          using I-node routines: found 1 nodes, limit used is 5
                linear system matrix = precond matrix:
                Mat Object: (mg_coarse_sub_) 1 MPI process
                  type: seqaij
                  rows=4, cols=4
                  total: nonzeros=16, allocated nonzeros=16
                  total number of mallocs used during MatSetValues calls=0
                    using I-node routines: found 1 nodes, limit used is 5
            linear system matrix = precond matrix:
            Mat Object: 4 MPI processes
              type: mpiaij
              rows=4, cols=4
              total: nonzeros=16, allocated nonzeros=16
              total number of mallocs used during MatSetValues calls=0
                using nonscalable MatPtAP() implementation
                using I-node (on process 0) routines: found 1 nodes, limit used is 5
        Down solver (pre-smoother) on level 1 -------------------------------
          KSP Object: (mg_levels_1_) 4 MPI processes
            type: chebyshev
              Chebyshev polynomial of first kind
              eigenvalue targets used: min 0.327489, max 1.80119
              eigenvalues provided (min 0.133814, max 1.63744) with transform: [0. 0.2; 0. 1.1]
            maximum iterations=2, nonzero initial guess
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_levels_1_) 4 MPI processes
            type: jacobi
              type DIAGONAL
            linear system matrix = precond matrix:
            Mat Object: 4 MPI processes
              type: mpiaij
              rows=147, cols=147
              total: nonzeros=3703, allocated nonzeros=3703
              total number of mallocs used during MatSetValues calls=0
                not using I-node (on process 0) routines
        Up solver (post-smoother) same as down solver (pre-smoother)
        linear system matrix = precond matrix:
        Mat Object: 4 MPI processes
          type: mpiaij
          rows=147, cols=147
          total: nonzeros=3703, allocated nonzeros=3703
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
[0] 0) N=           9, max displ=2.5713786e+01, error=9.564e+00
[0] 1) N=         147, max displ=3.1758769e+01, disp diff= 6.04e+00, error=3.519e+00, rate=1.4